2016-03-13T08:51:51.631+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30540|0||563ba74d869758a4542b9075 to shard version 30541|0||563ba74d869758a4542b9075 2016-03-13T08:51:51.631+0000 I SHARDING [conn788] collection version was loaded at version 30541|1||563ba74d869758a4542b9075, took 48ms 2016-03-13T08:51:51.633+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:51:51.635+0000 I NETWORK [initandlisten] connection accepted from 10.0.167.43:37043 #789 (343 connections now open) 2016-03-13T08:51:51.635+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') }, with opId: 58782152 2016-03-13T08:51:51.635+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } 2016-03-13T08:51:51.639+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:51:51.639+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } 2016-03-13T08:51:51.673+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } 2016-03-13T08:51:51.747+0000 I NETWORK [initandlisten] connection accepted from 10.0.167.43:37044 #790 (344 connections now open) 2016-03-13T08:51:51.757+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } 2016-03-13T08:51:51.757+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') } 2016-03-13T08:51:51.757+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:51:51.757+0000-56e52a2722724ad62c347c1f", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859111757), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da23aca83e820e42945d61') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55da5345584ad15e53727597') }, step 1 of 5: 1, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 118, note: "success" } } 2016-03-13T08:51:57.917+0000 I SHARDING 
[conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30541|0||563ba74d869758a4542b9075, current metadata version is 30541|1||563ba74d869758a4542b9075 2016-03-13T08:51:57.965+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30541|0||563ba74d869758a4542b9075 to shard version 30542|0||563ba74d869758a4542b9075 2016-03-13T08:51:57.965+0000 I SHARDING [conn788] collection version was loaded at version 30542|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:51:57.967+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:51:57.968+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') }, with opId: 58782259 2016-03-13T08:51:57.968+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } 2016-03-13T08:51:57.972+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:51:57.972+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } 2016-03-13T08:51:57.972+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } 2016-03-13T08:51:58.028+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } 2016-03-13T08:51:58.028+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') } 2016-03-13T08:51:58.028+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:51:58.028+0000-56e52a2e22724ad62c347c20", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859118028), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55db4df25398761c47f0778c') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dbe6382838be70f5a909f2') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:00.474+0000 I SHARDING [LockPinger] cluster 
in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:52:00.295+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:52:03.761+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30542|0||563ba74d869758a4542b9075, current metadata version is 30542|1||563ba74d869758a4542b9075 2016-03-13T08:52:03.809+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30542|0||563ba74d869758a4542b9075 to shard version 30543|0||563ba74d869758a4542b9075 2016-03-13T08:52:03.809+0000 I SHARDING [conn788] collection version was loaded at version 30543|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:03.811+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:03.811+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') }, with opId: 58782358 2016-03-13T08:52:03.812+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } 2016-03-13T08:52:03.815+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:03.815+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } 2016-03-13T08:52:03.816+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } 2016-03-13T08:52:03.872+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } 2016-03-13T08:52:03.872+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') } 2016-03-13T08:52:03.872+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:03.872+0000-56e52a3322724ad62c347c21", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859123872), what: 
"moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dcb45e2838be70f77ca0f4') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55dd81b02838be70f7919733') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:09.689+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30543|0||563ba74d869758a4542b9075, current metadata version is 30543|1||563ba74d869758a4542b9075 2016-03-13T08:52:09.737+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30543|0||563ba74d869758a4542b9075 to shard version 30544|0||563ba74d869758a4542b9075 2016-03-13T08:52:09.737+0000 I SHARDING [conn788] collection version was loaded at version 30544|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:09.739+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:09.740+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') }, with opId: 58782456 2016-03-13T08:52:09.740+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } 2016-03-13T08:52:09.744+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:09.744+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } 2016-03-13T08:52:09.744+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } 2016-03-13T08:52:09.800+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } 2016-03-13T08:52:09.800+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') } 2016-03-13T08:52:09.800+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:09.800+0000-56e52a3922724ad62c347c22", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859129800), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: 
ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55de16d215c6720e355a899b') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55ded57ac2e0054cfc342d63') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:15.564+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30544|0||563ba74d869758a4542b9075, current metadata version is 30544|1||563ba74d869758a4542b9075 2016-03-13T08:52:15.612+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30544|0||563ba74d869758a4542b9075 to shard version 30545|0||563ba74d869758a4542b9075 2016-03-13T08:52:15.612+0000 I SHARDING [conn788] collection version was loaded at version 30545|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:15.614+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:15.615+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') }, with opId: 58782558 2016-03-13T08:52:15.615+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } 2016-03-13T08:52:15.618+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:15.619+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } 2016-03-13T08:52:15.619+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } 2016-03-13T08:52:15.675+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } 2016-03-13T08:52:15.675+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55df58a473d24e2fba014476') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') } 2016-03-13T08:52:15.675+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:15.675+0000-56e52a3f22724ad62c347c23", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859135675), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: 
ObjectId('55df58a473d24e2fba014476') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e02e668402cc0e48f230df') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:21.893+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30545|0||563ba74d869758a4542b9075, current metadata version is 30545|1||563ba74d869758a4542b9075 2016-03-13T08:52:21.941+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30545|0||563ba74d869758a4542b9075 to shard version 30546|0||563ba74d869758a4542b9075 2016-03-13T08:52:21.941+0000 I SHARDING [conn788] collection version was loaded at version 30546|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:21.943+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:21.944+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') }, with opId: 58782662 2016-03-13T08:52:21.944+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } 2016-03-13T08:52:21.948+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:21.948+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } 2016-03-13T08:52:21.948+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } 2016-03-13T08:52:22.004+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } 2016-03-13T08:52:22.004+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') } 2016-03-13T08:52:22.004+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:22.004+0000-56e52a4522724ad62c347c24", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859142004), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e0afac8402cc0e480169a2') }, max: { a: 
ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e19c218402cc0e4819d03d') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:27.963+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30546|0||563ba74d869758a4542b9075, current metadata version is 30546|1||563ba74d869758a4542b9075 2016-03-13T08:52:28.011+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30546|0||563ba74d869758a4542b9075 to shard version 30547|0||563ba74d869758a4542b9075 2016-03-13T08:52:28.012+0000 I SHARDING [conn788] collection version was loaded at version 30547|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:28.014+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:28.014+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') }, with opId: 58782764 2016-03-13T08:52:28.015+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } 2016-03-13T08:52:28.018+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:28.018+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } 2016-03-13T08:52:28.018+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } 2016-03-13T08:52:28.074+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } 2016-03-13T08:52:28.075+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1f0e0584ad1085721c2c1') } 2016-03-13T08:52:28.075+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:28.075+0000-56e52a4c22724ad62c347c25", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859148075), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e1cfd51524090e3716b143') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: 
ObjectId('55e1f0e0584ad1085721c2c1') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:30.630+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:52:30.475+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:52:36.377+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30547|0||563ba74d869758a4542b9075, current metadata version is 30547|1||563ba74d869758a4542b9075 2016-03-13T08:52:36.425+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30547|0||563ba74d869758a4542b9075 to shard version 30548|0||563ba74d869758a4542b9075 2016-03-13T08:52:36.425+0000 I SHARDING [conn788] collection version was loaded at version 30548|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:36.427+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:36.428+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') }, with opId: 58782902 2016-03-13T08:52:36.428+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } 2016-03-13T08:52:36.432+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:36.432+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } 2016-03-13T08:52:36.432+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } 2016-03-13T08:52:36.487+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } 2016-03-13T08:52:36.488+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') } 2016-03-13T08:52:36.488+0000 I SHARDING [migrateThread] about to log 
metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:36.488+0000-56e52a5422724ad62c347c26", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859156488), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e212065a5f710e30361352') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e22bca16c3400e2cbcabeb') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:42.066+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30548|0||563ba74d869758a4542b9075, current metadata version is 30548|1||563ba74d869758a4542b9075 2016-03-13T08:52:42.113+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30548|0||563ba74d869758a4542b9075 to shard version 30549|0||563ba74d869758a4542b9075 2016-03-13T08:52:42.113+0000 I SHARDING [conn788] collection version was loaded at version 30549|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:42.115+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:42.116+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') }, with opId: 58782997 2016-03-13T08:52:42.116+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } 2016-03-13T08:52:42.120+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:42.120+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } 2016-03-13T08:52:42.121+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } 2016-03-13T08:52:42.176+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } 2016-03-13T08:52:42.176+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') } 2016-03-13T08:52:42.177+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: 
"ip-10-0-167-74-2016-03-13T08:52:42.177+0000-56e52a5a22724ad62c347c27", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859162177), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e2c227584ad10857387fea') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e32ca03a3b3f0e39a38b95') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 56, note: "success" } } 2016-03-13T08:52:47.990+0000 I SHARDING [conn788] remotely refreshing metadata for mydomain.Sessions based on current shard version 30549|0||563ba74d869758a4542b9075, current metadata version is 30549|1||563ba74d869758a4542b9075 2016-03-13T08:52:48.037+0000 I SHARDING [conn788] updating metadata for mydomain.Sessions from shard version 30549|0||563ba74d869758a4542b9075 to shard version 30550|0||563ba74d869758a4542b9075 2016-03-13T08:52:48.037+0000 I SHARDING [conn788] collection version was loaded at version 30550|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T08:52:48.039+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } for collection mydomain.Sessions from rsmydomain3/in.db3m1.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T08:52:48.040+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') }, with opId: 58783096 2016-03-13T08:52:48.040+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } 2016-03-13T08:52:48.045+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T08:52:48.045+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } 2016-03-13T08:52:48.113+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } 2016-03-13T08:52:48.227+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } 2016-03-13T08:52:48.227+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') } -> { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') } 2016-03-13T08:52:48.227+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T08:52:48.227+0000-56e52a6022724ad62c347c28", 
server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457859168227), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e357a8584ad108596fb25f') }, max: { a: ObjectId('5211e594645cff218c1b662b'), _id: ObjectId('55e3cca6584ad108597c4cc3') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 182, note: "success" } } 2016-03-13T08:53:00.769+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:53:00.631+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:53:31.038+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:53:30.770+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:54:01.486+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:54:01.038+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:54:31.737+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:54:31.486+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:55:02.009+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:55:01.737+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:55:32.283+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:55:32.010+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:56:02.492+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:56:02.283+0000 by distributed lock pinger 
'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:56:32.676+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:56:32.493+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:57:02.899+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:57:02.677+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:57:33.047+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:57:32.899+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:58:03.197+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:58:03.047+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:58:33.467+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:58:33.197+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:59:03.612+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:59:03.468+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T08:59:33.810+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T08:59:33.612+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:00:05.464+0000 I SHARDING [LockPinger] cluster 
in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:00:03.810+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:00:35.728+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:00:35.464+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:01:07.856+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:01:05.728+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:01:38.007+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:01:37.856+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:02:08.266+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:02:08.008+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:02:38.525+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:02:38.267+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:03:08.716+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:03:08.525+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:03:38.985+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:03:38.716+0000 by distributed lock pinger 
'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:04:09.142+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:04:08.986+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:04:39.430+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:04:39.142+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:05:09.615+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:05:09.430+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:05:39.874+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:05:39.616+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:06:10.146+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:06:09.874+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:06:40.294+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:06:40.146+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:07:10.485+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:07:10.294+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:07:40.771+0000 I SHARDING [LockPinger] cluster 
in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:07:40.485+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:08:11.037+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:08:10.771+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:08:41.285+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:08:41.038+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:09:11.542+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:09:11.285+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:09:43.417+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:09:41.542+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:10:13.667+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:10:13.417+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:10:13.725+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30550|0||563ba74d869758a4542b9075, current metadata version is 30550|1||563ba74d869758a4542b9075 2016-03-13T09:10:13.775+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30550|0||563ba74d869758a4542b9075 to shard version 30551|0||563ba74d869758a4542b9075 2016-03-13T09:10:13.775+0000 I SHARDING [conn143] collection version was loaded at version 30620|1||563ba74d869758a4542b9075, took 49ms 2016-03-13T09:10:13.777+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: 
ObjectId('563430eea2a4330ea41e7d4d') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:10:13.799+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') }, with opId: 58800332 2016-03-13T09:10:13.799+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') } 2016-03-13T09:10:20.417+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:10:21.417+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:10:21.417+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') } 2016-03-13T09:10:21.418+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') } 2016-03-13T09:10:22.242+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') } 2016-03-13T09:10:22.242+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') } 2016-03-13T09:10:22.242+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:10:22.242+0000-56e52e7e22724ad62c347c29", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860222242), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56342fbc7bc3500e96202352') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563430eea2a4330ea41e7d4d') }, step 1 of 5: 22, step 2 of 5: 3, step 3 of 5: 6613, step 4 of 5: 0, step 5 of 5: 1824, note: "success" } } 2016-03-13T09:10:41.699+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30551|0||563ba74d869758a4542b9075, current metadata version is 30620|1||563ba74d869758a4542b9075 2016-03-13T09:10:41.746+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30551|0||563ba74d869758a4542b9075 to shard version 30621|0||563ba74d869758a4542b9075 2016-03-13T09:10:41.746+0000 I SHARDING [conn143] collection version was loaded at version 30622|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:10:41.748+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), 
_id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:10:41.750+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') }, with opId: 58801420 2016-03-13T09:10:41.750+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } 2016-03-13T09:10:43.947+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:10:43.667+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:10:48.456+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:10:49.457+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:10:49.457+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } 2016-03-13T09:10:49.457+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } 2016-03-13T09:10:50.218+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } 2016-03-13T09:10:50.218+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') } 2016-03-13T09:10:50.218+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:10:50.218+0000-56e52e9a22724ad62c347c2a", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860250218), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343221a2a4330ea41ea076') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343351a2a4330ea41ebb08') }, step 1 of 5: 2, step 2 of 5: 3, step 3 of 5: 6701, step 4 of 5: 0, step 5 of 5: 1761, note: "success" } } 2016-03-13T09:11:10.681+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 
30621|0||563ba74d869758a4542b9075, current metadata version is 30622|1||563ba74d869758a4542b9075 2016-03-13T09:11:10.729+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30621|0||563ba74d869758a4542b9075 to shard version 30623|0||563ba74d869758a4542b9075 2016-03-13T09:11:10.729+0000 I SHARDING [conn143] collection version was loaded at version 30624|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:11:10.731+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:11:10.732+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') }, with opId: 58802675 2016-03-13T09:11:10.732+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } 2016-03-13T09:11:14.142+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:11:13.948+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:11:17.353+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:11:17.353+0000 W SHARDING [migrateThread] migrate commit waiting for a majority of slaves for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } waiting for: (term: 0, timestamp: Mar 13 09:11:17:365) 2016-03-13T09:11:18.353+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:11:18.354+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } 2016-03-13T09:11:18.354+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } 2016-03-13T09:11:19.206+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } 2016-03-13T09:11:19.275+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 
'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') } 2016-03-13T09:11:19.275+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:11:19.275+0000-56e52eb722724ad62c347c2b", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860279275), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563434917bc3500e945e8073') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563435d855418c0ec3f49c0d') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 6615, step 4 of 5: 1, step 5 of 5: 1921, note: "success" } } 2016-03-13T09:11:38.072+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30623|0||563ba74d869758a4542b9075, current metadata version is 30624|1||563ba74d869758a4542b9075 2016-03-13T09:11:38.119+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30623|0||563ba74d869758a4542b9075 to shard version 30625|0||563ba74d869758a4542b9075 2016-03-13T09:11:38.120+0000 I SHARDING [conn19] collection version was loaded at version 30626|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:11:38.121+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:11:38.124+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') }, with opId: 58803812 2016-03-13T09:11:38.124+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } 2016-03-13T09:11:44.270+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:11:44.142+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:11:45.012+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:11:46.012+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:11:46.012+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } 2016-03-13T09:11:46.013+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: 
ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } 2016-03-13T09:11:46.590+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } 2016-03-13T09:11:46.591+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') } 2016-03-13T09:11:46.591+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:11:46.591+0000-56e52ed222724ad62c347c2c", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860306591), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634371a7bc3500e945eb4e6') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634386555418c0ec5956101') }, step 1 of 5: 2, step 2 of 5: 3, step 3 of 5: 6884, step 4 of 5: 0, step 5 of 5: 1578, note: "success" } } 2016-03-13T09:12:11.270+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30625|0||563ba74d869758a4542b9075, current metadata version is 30626|1||563ba74d869758a4542b9075 2016-03-13T09:12:11.318+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30625|0||563ba74d869758a4542b9075 to shard version 30627|0||563ba74d869758a4542b9075 2016-03-13T09:12:11.318+0000 I SHARDING [conn19] collection version was loaded at version 30628|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:12:11.320+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:12:11.322+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') }, with opId: 58804997 2016-03-13T09:12:11.322+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } 2016-03-13T09:12:14.526+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:12:14.270+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:12:18.342+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical 
section 2016-03-13T09:12:19.342+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:12:19.342+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } 2016-03-13T09:12:19.342+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } 2016-03-13T09:12:19.771+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } 2016-03-13T09:12:19.771+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') } 2016-03-13T09:12:19.771+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:12:19.771+0000-56e52ef322724ad62c347c2d", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860339771), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563439b8584ad10f21336dad') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343b1455418c0ec3f53019') }, step 1 of 5: 2, step 2 of 5: 3, step 3 of 5: 7015, step 4 of 5: 0, step 5 of 5: 1429, note: "success" } } 2016-03-13T09:12:41.172+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30627|0||563ba74d869758a4542b9075, current metadata version is 30628|1||563ba74d869758a4542b9075 2016-03-13T09:12:41.220+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30627|0||563ba74d869758a4542b9075 to shard version 30629|0||563ba74d869758a4542b9075 2016-03-13T09:12:41.220+0000 I SHARDING [conn19] collection version was loaded at version 30630|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:12:41.222+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:12:41.223+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') }, with opId: 58806252 2016-03-13T09:12:41.223+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } 
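Each successful "moveChunk.to" event above is also recorded as a document in the config servers' config.changelog collection, with the per-step timings in milliseconds under "details" (in these entries, "step 3 of 5" dominates at roughly 6-7 seconds per chunk). The following is a minimal sketch, not part of the log, for pulling those records back out for review: it assumes pymongo and direct access to one of the config servers pinged by the LockPinger lines (in.dbcfg1.mongo323.test.mydomain.com:27019), and the filter and field names are taken from the entries above.

    from pymongo import MongoClient

    # Config server host taken from the LockPinger lines above; adjust as needed.
    client = MongoClient("mongodb://in.dbcfg1.mongo323.test.mydomain.com:27019")

    # Recipient-side migration records for this collection, newest first,
    # keeping only the outcome note and the per-step timings (milliseconds).
    cursor = client.config.changelog.find(
        {"what": "moveChunk.to", "ns": "mydomain.Sessions"},
        {"time": 1, "details.note": 1,
         "details.step 1 of 5": 1, "details.step 2 of 5": 1, "details.step 3 of 5": 1,
         "details.step 4 of 5": 1, "details.step 5 of 5": 1},
    ).sort("time", -1)

    for doc in cursor:
        print(doc["time"], doc["details"])
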
2016-03-13T09:12:44.689+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:12:44.526+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:12:48.526+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:12:49.526+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:12:50.526+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:12:50.527+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } 2016-03-13T09:12:50.527+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } 2016-03-13T09:12:50.693+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } 2016-03-13T09:12:50.693+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') } 2016-03-13T09:12:50.693+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:12:50.693+0000-56e52f1222724ad62c347c2e", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860370693), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343c70a2a4330ea293d314') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343de255418c0ec5960271') }, step 1 of 5: 1, step 2 of 5: 3, step 3 of 5: 7298, step 4 of 5: 0, step 5 of 5: 2166, note: "success" } } 2016-03-13T09:13:12.189+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30629|0||563ba74d869758a4542b9075, current metadata version is 30630|1||563ba74d869758a4542b9075 2016-03-13T09:13:12.236+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30629|0||563ba74d869758a4542b9075 to shard version 30631|0||563ba74d869758a4542b9075 2016-03-13T09:13:12.236+0000 I SHARDING [conn19] collection version was loaded at version 30632|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:13:12.238+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } for collection mydomain.Sessions from 
rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:13:12.239+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') }, with opId: 58807555 2016-03-13T09:13:12.240+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } 2016-03-13T09:13:14.825+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:13:14.689+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:13:20.012+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:13:21.013+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:13:21.013+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } 2016-03-13T09:13:21.013+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } 2016-03-13T09:13:21.692+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } 2016-03-13T09:13:21.692+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') } 2016-03-13T09:13:21.692+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:13:21.692+0000-56e52f3122724ad62c347c2f", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860401692), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56343f36584ad10f21340232') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563440a555418c0ec3f5d0e8') }, step 1 of 5: 1, step 2 of 5: 3, step 3 of 5: 7768, step 4 of 5: 0, step 5 of 5: 1679, note: "success" } } 2016-03-13T09:13:42.572+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30631|0||563ba74d869758a4542b9075, current metadata version is 30632|1||563ba74d869758a4542b9075 2016-03-13T09:13:42.620+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions 
from shard version 30631|0||563ba74d869758a4542b9075 to shard version 30633|0||563ba74d869758a4542b9075 2016-03-13T09:13:42.620+0000 I SHARDING [conn19] collection version was loaded at version 30634|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:13:42.622+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:13:42.625+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') }, with opId: 58808897 2016-03-13T09:13:42.625+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } 2016-03-13T09:13:44.955+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:13:44.825+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:13:50.345+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:13:51.345+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:13:51.346+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } 2016-03-13T09:13:51.346+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } 2016-03-13T09:13:52.089+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } 2016-03-13T09:13:52.089+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344228a2a4330ea2946fcc') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') } 2016-03-13T09:13:52.089+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:13:52.089+0000-56e52f5022724ad62c347c30", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860432089), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: 
ObjectId('56344228a2a4330ea2946fcc') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563443b6a2a4330ea29499ae') }, step 1 of 5: 2, step 2 of 5: 3, step 3 of 5: 7716, step 4 of 5: 0, step 5 of 5: 1743, note: "success" } } 2016-03-13T09:14:15.077+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:14:14.955+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:14:16.240+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30633|0||563ba74d869758a4542b9075, current metadata version is 30634|1||563ba74d869758a4542b9075 2016-03-13T09:14:16.288+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30633|0||563ba74d869758a4542b9075 to shard version 30635|0||563ba74d869758a4542b9075 2016-03-13T09:14:16.288+0000 I SHARDING [conn19] collection version was loaded at version 30636|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:14:16.290+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:14:16.303+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') }, with opId: 58810280 2016-03-13T09:14:16.303+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } 2016-03-13T09:14:22.926+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:14:23.926+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:14:23.926+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } 2016-03-13T09:14:23.926+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } 2016-03-13T09:14:24.722+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } 2016-03-13T09:14:24.722+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 
'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') } 2016-03-13T09:14:24.722+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:14:24.722+0000-56e52f7022724ad62c347c31", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860464722), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634454855418c0ec3f64c1c') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563446f855418c0ec3f67ffc') }, step 1 of 5: 13, step 2 of 5: 3, step 3 of 5: 6616, step 4 of 5: 1, step 5 of 5: 1796, note: "success" } } 2016-03-13T09:14:43.180+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30635|0||563ba74d869758a4542b9075, current metadata version is 30636|1||563ba74d869758a4542b9075 2016-03-13T09:14:43.228+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30635|0||563ba74d869758a4542b9075 to shard version 30637|0||563ba74d869758a4542b9075 2016-03-13T09:14:43.228+0000 I SHARDING [conn143] collection version was loaded at version 30638|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:14:43.230+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:14:43.231+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') }, with opId: 58811442 2016-03-13T09:14:43.231+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } 2016-03-13T09:14:45.361+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:14:45.078+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:14:49.611+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:14:50.611+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:14:50.611+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } 2016-03-13T09:14:50.611+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: 
ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } 2016-03-13T09:14:51.694+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } 2016-03-13T09:14:51.694+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') } 2016-03-13T09:14:51.694+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:14:51.694+0000-56e52f8b22724ad62c347c32", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860491694), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('563448a3a2a4330ea420ec84') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344a6a584ad10f2135352d') }, step 1 of 5: 1, step 2 of 5: 3, step 3 of 5: 6375, step 4 of 5: 0, step 5 of 5: 2083, note: "success" } } 2016-03-13T09:15:11.020+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30637|0||563ba74d869758a4542b9075, current metadata version is 30638|1||563ba74d869758a4542b9075 2016-03-13T09:15:11.067+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30637|0||563ba74d869758a4542b9075 to shard version 30639|0||563ba74d869758a4542b9075 2016-03-13T09:15:11.068+0000 I SHARDING [conn19] collection version was loaded at version 30640|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:15:11.069+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:15:11.071+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') }, with opId: 58812558 2016-03-13T09:15:11.071+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } 2016-03-13T09:15:15.660+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:15:15.361+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:15:17.680+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical 
section 2016-03-13T09:15:18.680+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:15:18.680+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } 2016-03-13T09:15:18.680+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } 2016-03-13T09:15:19.558+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } 2016-03-13T09:15:19.559+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') } 2016-03-13T09:15:19.559+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:15:19.559+0000-56e52fa722724ad62c347c33", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860519559), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344c42a2a4330ea29576e8') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56344e1255418c0ec3f745c2') }, step 1 of 5: 1, step 2 of 5: 3, step 3 of 5: 6605, step 4 of 5: 0, step 5 of 5: 1878, note: "success" } } 2016-03-13T09:15:40.856+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30639|0||563ba74d869758a4542b9075, current metadata version is 30640|1||563ba74d869758a4542b9075 2016-03-13T09:15:40.904+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30639|0||563ba74d869758a4542b9075 to shard version 30641|0||563ba74d869758a4542b9075 2016-03-13T09:15:40.904+0000 I SHARDING [conn19] collection version was loaded at version 30642|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:15:40.906+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:15:40.907+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') }, with opId: 58813748 2016-03-13T09:15:40.907+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } 
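The repeated "Waiting for replication to catch up before entering critical section" lines (and the earlier "migrate commit waiting for a majority of slaves" warning) show the recipient pausing until a majority of its own replica set has applied the migrated documents before the migration can commit. If those waits lengthen, secondary lag on the receiving set is the first thing to check; the sketch below is one way to do that and is an assumption on my part rather than anything in the log, connecting to the recipient mongod shown here (ip-10-0-167-74, assumed reachable as localhost:27017) and comparing member optimes from replSetGetStatus.

    from pymongo import MongoClient

    # The recipient mongod from this log; "localhost:27017" is an assumption,
    # substitute the real address of ip-10-0-167-74.
    client = MongoClient("mongodb://localhost:27017")

    status = client.admin.command("replSetGetStatus")
    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    for member in status["members"]:
        if member["stateStr"] == "SECONDARY":
            lag = (primary["optimeDate"] - member["optimeDate"]).total_seconds()
            print("%s is %.1f s behind the primary" % (member["name"], lag))
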
2016-03-13T09:15:45.960+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:15:45.660+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:15:48.322+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:15:49.322+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:15:49.322+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } 2016-03-13T09:15:49.322+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } 2016-03-13T09:15:49.899+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } 2016-03-13T09:15:49.899+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') } 2016-03-13T09:15:49.899+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:15:49.899+0000-56e52fc522724ad62c347c34", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860549899), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634500555418c0ec3f7765c') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634520d584ad10f21360d6d') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 7410, step 4 of 5: 0, step 5 of 5: 1577, note: "success" } } 2016-03-13T09:16:13.038+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30641|0||563ba74d869758a4542b9075, current metadata version is 30642|1||563ba74d869758a4542b9075 2016-03-13T09:16:13.086+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30641|0||563ba74d869758a4542b9075 to shard version 30643|0||563ba74d869758a4542b9075 2016-03-13T09:16:13.086+0000 I SHARDING [conn143] collection version was loaded at version 30644|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:16:13.088+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:16:13.089+0000 I SHARDING 
[migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') }, with opId: 58815206 2016-03-13T09:16:13.089+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } 2016-03-13T09:16:16.104+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:16:15.960+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:16:20.033+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:16:21.033+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:16:21.034+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } 2016-03-13T09:16:21.034+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } 2016-03-13T09:16:21.517+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } 2016-03-13T09:16:21.517+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') } 2016-03-13T09:16:21.517+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:16:21.517+0000-56e52fe522724ad62c347c35", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860581517), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345421584ad10f1f990b05') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634564b55418c0ec3f816f0') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 6940, step 4 of 5: 0, step 5 of 5: 1483, note: "success" } } 2016-03-13T09:16:41.973+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30643|0||563ba74d869758a4542b9075, current metadata version is 30644|1||563ba74d869758a4542b9075 2016-03-13T09:16:42.021+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30643|0||563ba74d869758a4542b9075 to shard version 30645|0||563ba74d869758a4542b9075 2016-03-13T09:16:42.021+0000 I SHARDING [conn143] collection 
version was loaded at version 30646|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:16:42.023+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:16:42.024+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') }, with opId: 58816361 2016-03-13T09:16:42.024+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } 2016-03-13T09:16:46.265+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:16:46.105+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:16:48.505+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:16:49.506+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:16:49.506+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } 2016-03-13T09:16:49.506+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } 2016-03-13T09:16:50.520+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } 2016-03-13T09:16:50.520+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') } 2016-03-13T09:16:50.520+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:16:50.520+0000-56e5300222724ad62c347c36", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860610520), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634589655418c0ec3f850a4') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345b0e55418c0ec5990cac') }, step 1 of 5: 0, step 2 of 5: 3, 
step 3 of 5: 6473, step 4 of 5: 3, step 5 of 5: 2014, note: "success" } } 2016-03-13T09:17:13.531+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30645|0||563ba74d869758a4542b9075, current metadata version is 30646|1||563ba74d869758a4542b9075 2016-03-13T09:17:13.579+0000 I SHARDING [conn143] updating metadata for mydomain.Sessions from shard version 30645|0||563ba74d869758a4542b9075 to shard version 30647|0||563ba74d869758a4542b9075 2016-03-13T09:17:13.579+0000 I SHARDING [conn143] collection version was loaded at version 30648|1||563ba74d869758a4542b9075, took 47ms 2016-03-13T09:17:13.581+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:17:13.582+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') }, with opId: 58817622 2016-03-13T09:17:13.582+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') } 2016-03-13T09:17:16.433+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:17:16.266+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:17:20.233+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:17:21.233+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section 2016-03-13T09:17:21.233+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') } 2016-03-13T09:17:21.233+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') } 2016-03-13T09:17:22.036+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') } 2016-03-13T09:17:22.037+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: 
ObjectId('5634605ca2a4330ea423620f') } 2016-03-13T09:17:22.037+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:17:22.037+0000-56e5302222724ad62c347c37", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860642037), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56345d9e584ad10f1f9a10c6') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634605ca2a4330ea423620f') }, step 1 of 5: 0, step 2 of 5: 3, step 3 of 5: 6646, step 4 of 5: 0, step 5 of 5: 1804, note: "success" } } 2016-03-13T09:17:31.935+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47097 #791 (345 connections now open) 2016-03-13T09:17:32.076+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50287 #792 (346 connections now open) 2016-03-13T09:17:32.078+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50288 #793 (347 connections now open) 2016-03-13T09:17:32.099+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50289 #794 (348 connections now open) 2016-03-13T09:17:32.124+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47101 #795 (349 connections now open) 2016-03-13T09:17:32.333+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47104 #796 (350 connections now open) 2016-03-13T09:17:38.369+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47111 #797 (351 connections now open) 2016-03-13T09:17:38.370+0000 I SHARDING [conn781] request split points lookup for chunk mydomain.DeviceCodeInternalReports { : ObjectId('56ccb74dc478a16593a157c1'), : new Date(1456185600000), : MinKey, : MinKey } -->> { : MaxKey, : MaxKey, : MinKey, : MinKey } 2016-03-13T09:17:38.370+0000 I SHARDING [conn781] limiting split vector to 250000 (from 291777) objects 2016-03-13T09:17:38.370+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47112 #798 (352 connections now open) 2016-03-13T09:17:38.484+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50300 #799 (353 connections now open) 2016-03-13T09:17:38.487+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.DeviceCodeInternalReports { : ObjectId('56ccb74dc478a16593a157c1'), : new Date(1456185600000), : MinKey, : MinKey } -->> { : MaxKey, : MaxKey, : MinKey, : MinKey } 2016-03-13T09:17:38.487+0000 I SHARDING [conn799] limiting split vector to 250000 (from 291777) objects 2016-03-13T09:17:38.576+0000 I SHARDING [conn794] remotely refreshing metadata for mydomain.Sessions with requested shard version 30649|0||563ba74d869758a4542b9075 based on current shard version 30647|0||563ba74d869758a4542b9075, current metadata version is 30648|1||563ba74d869758a4542b9075 2016-03-13T09:17:38.637+0000 I SHARDING [conn794] updating metadata for mydomain.Sessions from shard version 30647|0||563ba74d869758a4542b9075 to shard version 30649|0||563ba74d869758a4542b9075 2016-03-13T09:17:38.638+0000 I SHARDING [conn794] collection version was loaded at version 30650|1||563ba74d869758a4542b9075, took 61ms 2016-03-13T09:17:46.705+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:17:46.433+0000 by distributed lock pinger 
'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms 2016-03-13T09:17:50.832+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50308 #800 (354 connections now open) 2016-03-13T09:17:50.975+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47117 #801 (355 connections now open) 2016-03-13T09:17:50.975+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.DeviceCodeInternalReports { : ObjectId('56ccb74dc478a16593a157c1'), : new Date(1456185600000), : MinKey, : MinKey } -->> { : MaxKey, : MaxKey, : MinKey, : MinKey } 2016-03-13T09:17:50.975+0000 I SHARDING [conn798] limiting split vector to 250000 (from 291777) objects 2016-03-13T09:17:50.979+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47121 #802 (356 connections now open) 2016-03-13T09:17:50.984+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47131 #803 (357 connections now open) 2016-03-13T09:17:50.985+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47133 #804 (358 connections now open) 2016-03-13T09:17:50.990+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50310 #805 (359 connections now open) 2016-03-13T09:17:51.002+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47138 #806 (360 connections now open) 2016-03-13T09:17:57.106+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50325 #807 (361 connections now open) 2016-03-13T09:17:57.109+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50328 #808 (362 connections now open) 2016-03-13T09:18:02.574+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50331 #809 (363 connections now open) 2016-03-13T09:18:02.632+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('55edaeef3d6cf9f7d5bed209'), : new Date(1440460800000), : MinKey } -->> { : ObjectId('562b6b60bf86ce4ffd2009eb'), : new Date(1445990400000), : MinKey } 2016-03-13T09:18:02.958+0000 I SHARDING [conn143] remotely refreshing metadata for mydomain.Sessions based on current shard version 30649|0||563ba74d869758a4542b9075, current metadata version is 30650|1||563ba74d869758a4542b9075 2016-03-13T09:18:03.007+0000 I SHARDING [conn143] metadata of collection mydomain.Sessions already up to date (shard version : 30649|0||563ba74d869758a4542b9075, took 48ms) 2016-03-13T09:18:03.010+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075 2016-03-13T09:18:03.011+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }, with opId: 58819470 2016-03-13T09:18:03.012+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), 
2016-03-13T09:18:03.012+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }
2016-03-13T09:18:09.365+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section
2016-03-13T09:18:09.646+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:10.070+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:10.072+0000 I SHARDING [conn780] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:10.366+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section
2016-03-13T09:18:10.366+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }
2016-03-13T09:18:10.366+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }
2016-03-13T09:18:10.829+0000 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }
2016-03-13T09:18:10.829+0000 I SHARDING [migrateThread] migrate commit flushed to journal for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }
2016-03-13T09:18:10.829+0000 I SHARDING [migrateThread] about to log metadata event into changelog: { _id: "ip-10-0-167-74-2016-03-13T09:18:10.829+0000-56e5305222724ad62c347c50", server: "ip-10-0-167-74", clientAddr: "", time: new Date(1457860690829), what: "moveChunk.to", ns: "mydomain.Sessions", details: { min: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346357584ad10f1f9aaa8c') }, max: { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('5634669d55418c0ec3f9ca7d') }, step 1 of 5: 1, step 2 of 5: 4, step 3 of 5: 6349, step 4 of 5: 0, step 5 of 5: 1463, note: "success" } }
2016-03-13T09:18:16.584+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.587+0000 I SHARDING [conn781] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.589+0000 I SHARDING [conn780] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
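Note: each receive above runs through the same sequence (metadata refresh, "starting receiving-end of migration", range delete, replication catch-up, then the moveChunk.to changelog event). One rough way to quantify how long each receive takes from a saved copy of this log is to pair those two markers by timestamp; the sketch below assumes the log was saved to a hypothetical mongod.log file.

    # Rough sketch: pair each "starting receiving-end of migration" entry with the
    # next moveChunk.to changelog entry and report how long the receive took.
    # Assumes the log above was saved, one entry per line, to the file mongod.log.
    import re
    from datetime import datetime

    TS_FMT = "%Y-%m-%dT%H:%M:%S.%f%z"
    start_re = re.compile(r"^(\S+) I SHARDING \[migrateThread\] starting receiving-end of migration")
    done_re = re.compile(r'^(\S+) I SHARDING \[migrateThread\] about to log metadata event into changelog: .*"moveChunk\.to"')

    start = None
    with open("mongod.log") as log:
        for line in log:
            m = start_re.match(line)
            if m:
                start = datetime.strptime(m.group(1), TS_FMT)
                continue
            m = done_re.match(line)
            if m and start is not None:
                end = datetime.strptime(m.group(1), TS_FMT)
                print("receive-side migration took %.1fs" % (end - start).total_seconds())
                start = None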
2016-03-13T09:18:16.590+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47147 #810 (364 connections now open)
2016-03-13T09:18:16.594+0000 I SHARDING [conn810] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.597+0000 I SHARDING [conn797] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.598+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.605+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50349 #811 (365 connections now open)
2016-03-13T09:18:16.606+0000 I SHARDING [conn811] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.606+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50350 #812 (366 connections now open)
2016-03-13T09:18:16.607+0000 I SHARDING [conn812] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:16.728+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50351 #813 (367 connections now open)
2016-03-13T09:18:16.731+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50352 #814 (368 connections now open)
2016-03-13T09:18:17.421+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:18:16.705+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms
2016-03-13T09:18:22.584+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:22.586+0000 I SHARDING [conn797] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:22.586+0000 I SHARDING [conn810] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:22.588+0000 I SHARDING [conn781] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:22.595+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47163 #815 (369 connections now open)
2016-03-13T09:18:22.596+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.121:47164 #816 (370 connections now open)
2016-03-13T09:18:28.521+0000 I SHARDING [conn780] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:28.521+0000 I SHARDING [conn812] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:28.522+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50386 #817 (371 connections now open)
2016-03-13T09:18:28.522+0000 I SHARDING [conn811] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:28.523+0000 I SHARDING [conn813] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:28.523+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:28.524+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50389 #818 (372 connections now open)
2016-03-13T09:18:28.528+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50395 #819 (373 connections now open)
2016-03-13T09:18:29.293+0000 I SHARDING [conn781] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:29.296+0000 I SHARDING [conn797] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:29.300+0000 I SHARDING [conn815] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:29.301+0000 I SHARDING [conn810] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:29.302+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.724+0000 I SHARDING [conn811] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.724+0000 I SHARDING [conn780] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.727+0000 I SHARDING [conn799] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.729+0000 I SHARDING [conn813] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
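Note: the same top chunk of mydomain.AppVersionReports (ObjectId('56604d905e27753f5c59e037') up to MaxKey) is asked for split points over and over within the same minute, and the earlier lookups on mydomain.DeviceCodeInternalReports had their split vector capped at 250000 of 291777 objects; repeated lookups like this may indicate a chunk the system keeps trying to split without succeeding. A quick way to see which ranges are being hammered is to tally the lookups from a saved log; the sketch below reuses the same hypothetical mongod.log path as above.

    # Quick sketch: tally "request split points lookup" entries by namespace and
    # chunk bounds to see which ranges mongos keeps asking about.
    # Assumes the log was saved, one entry per line, to the file mongod.log.
    import re
    from collections import Counter

    lookup_re = re.compile(r"request split points lookup for chunk (\S+) ({.*?}) -->> ({.*})")

    counts = Counter()
    with open("mongod.log") as log:
        for line in log:
            m = lookup_re.search(line.rstrip())
            if m:
                counts[(m.group(1), m.group(2), m.group(3))] += 1

    for (ns, lower, upper), n in counts.most_common(10):
        print("%4d  %s  %s -->> %s" % (n, ns, lower, upper))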
2016-03-13T09:18:36.731+0000 I SHARDING [conn812] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.737+0000 I SHARDING [conn781] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.739+0000 I SHARDING [conn815] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.740+0000 I SHARDING [conn797] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.742+0000 I NETWORK [initandlisten] connection accepted from 10.0.231.120:50407 #820 (374 connections now open)
2016-03-13T09:18:36.760+0000 I SHARDING [conn798] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:36.760+0000 I SHARDING [conn810] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:47.608+0000 I SHARDING [LockPinger] cluster in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019 pinged successfully at 2016-03-13T09:18:47.421+0000 by distributed lock pinger 'in.dbcfg1.mongo323.test.mydomain.com:27019,in.dbcfg2.mongo323.test.mydomain.com:27019,in.dbcfg3.mongo323.test.mydomain.com:27019/ip-10-0-167-74:27017:1457539547:978884897', sleeping for 30000ms
2016-03-13T09:18:49.025+0000 I SHARDING [conn797] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:49.028+0000 I SHARDING [conn815] request split points lookup for chunk mydomain.AppVersionReports { : ObjectId('56604d905e27753f5c59e037'), : new Date(1449705600000), : MinKey } -->> { : MaxKey, : MaxKey, : MinKey }
2016-03-13T09:18:50.106+0000 I SHARDING [conn19] remotely refreshing metadata for mydomain.Sessions based on current shard version 30649|0||563ba74d869758a4542b9075, current metadata version is 30650|1||563ba74d869758a4542b9075
2016-03-13T09:18:50.207+0000 I SHARDING [conn19] updating metadata for mydomain.Sessions from shard version 30649|0||563ba74d869758a4542b9075 to shard version 30651|0||563ba74d869758a4542b9075
2016-03-13T09:18:50.207+0000 I SHARDING [conn19] collection version was loaded at version 30652|1||563ba74d869758a4542b9075, took 100ms
2016-03-13T09:18:50.209+0000 I SHARDING [migrateThread] starting receiving-end of migration of chunk { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346a2a55418c0ec59aacce') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346e2d55418c0ec59b2587') } for collection mydomain.Sessions from rsmydomain/in.db1m1.mongo323.test.mydomain.com:27017,in.db1m2.mongo323.test.mydomain.com:27017 at epoch 563ba74d869758a4542b9075
2016-03-13T09:18:50.210+0000 I SHARDING [migrateThread] Deleter starting delete for: mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346a2a55418c0ec59aacce') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346e2d55418c0ec59b2587') }, with opId: 58822316
2016-03-13T09:18:50.210+0000 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for mydomain.Sessions from { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346a2a55418c0ec59aacce') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346e2d55418c0ec59b2587') }
2016-03-13T09:18:57.270+0000 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section
2016-03-13T09:18:57.270+0000 W SHARDING [migrateThread] migrate commit waiting for a majority of slaves for 'mydomain.Sessions' { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346a2a55418c0ec59aacce') } -> { a: ObjectId('533efb4e645cff1f6c911e5a'), _id: ObjectId('56346e2d55418c0ec59b2587') } waiting for: (term: 0, timestamp: Mar 13 09:18:57:d8)
^C [ec2-user@ip-10-0-167-74 ~]$
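Note: the capture ends on the one warning in the session: the recipient's migrate commit is blocked "waiting for a majority of slaves", i.e. waiting for enough members of this replica set to acknowledge the migrated documents before the donor can enter its critical section, so secondary lag on the recipient shard is the first thing to check. A minimal sketch follows, assuming pymongo and direct access to the recipient's primary; the hostname is a placeholder for this node (ip-10-0-167-74).

    # Minimal sketch: report secondary lag on the recipient replica set while the
    # migrate commit waits for a majority. Assumes pymongo; the hostname below is
    # a placeholder for the recipient shard's primary.
    from pymongo import MongoClient

    client = MongoClient("mongodb://recipient-primary.example:27017")
    status = client.admin.command("replSetGetStatus")

    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    for member in status["members"]:
        if member["stateStr"] == "SECONDARY":
            lag = (primary["optimeDate"] - member["optimeDate"]).total_seconds()
            print("%s is %.0fs behind the primary" % (member["name"], lag))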