[SERVER-22615] Assertion: 10334:BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO when doing new replicaset member initialsync Created: 15/Feb/16  Updated: 06/May/16  Resolved: 19/Feb/16

Status: Closed
Project: Core Server
Component/s: Storage
Affects Version/s: 2.4.12
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Metikov Vadim Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
is duplicated by SERVER-22613 Is there a bug? Assertion: 10334:BSON... Closed
is duplicated by SERVER-22614 CLONE - Is there a bug? Assertion: 10... Closed
Operating System: ALL
Steps To Reproduce:

Primary running. Add a new secondary. When the new secondary starts its initial sync, the primary crashes with the logs below.

Participants:

 Description   

We use a sharded MongoDB cluster.
When I start an initial sync of a new member of the first shard, the primary crashes with this error:
Assertion: 10334:BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
The new secondary then reports state "SECONDARY" with only 12 data files instead of the expected 3000+.
What should I do next?

Mon Feb 15 07:00:37.148 [repl writer worker 1] Assertion: 10334:BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
0xdea5b1 0xdabd7b 0xdac2bc 0x6ee71f 0x6fb76c 0xacb1cb 0xa96d38 0xa9a6bc 0xa9c8d7 0xa756e8 0xc32823 0xc31f68 0xdb7f81 0xe33239 0x7fabc3a5ab50 0x7fabc2dfda7d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdea5b1]
/usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9b) [0xdabd7b]
/usr/bin/mongod() [0xdac2bc]
/usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x5bf) [0x6ee71f]
/usr/bin/mongod(_ZN5mongo7BSONObjC1EPKc+0x3c) [0x6fb76c]
/usr/bin/mongod(_ZN5mongo11DataFileMgr12updateRecordEPKcPNS_16NamespaceDetailsEPNS_25NamespaceDetailsTransientEPNS_6RecordERKNS_7DiskLocES2_iRNS_7OpDebugEb+0x6b) [0xacb1cb]
/usr/bin/mongod() [0xa96d38]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2d1c) [0xa9a6bc]
/usr/bin/mongod(_ZN5mongo27updateObjectsForReplicationEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa9c8d7]
/usr/bin/mongod(_ZN5mongo21applyOperation_inlockERKNS_7BSONObjEbb+0x628) [0xa756e8]
/usr/bin/mongod(_ZN5mongo7replset8SyncTail9syncApplyERKNS_7BSONObjEb+0x713) [0xc32823]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x48) [0xc31f68]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdb7f81]
/usr/bin/mongod() [0xe33239]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7fabc3a5ab50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fabc2dfda7d]
Mon Feb 15 07:00:37.208 [repl writer worker 1] ERROR: writer worker caught exception: BSONObj size: 0 (0x00000000) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
on: { ts: Timestamp 1455501634000|47, h: 4868822648138918134, v: 2, op: "i", ns: "grid_fs.fs.chunks", fromMigrate: true, o:
{ _id: ObjectId('51bc7682d5d30701318599d0'), n: 0, data: BinData , files_id: ObjectId('51bc71a5f7c07966503f15f8') }
}
Mon Feb 15 07:00:37.208 [repl writer worker 1] Fatal Assertion 16360
0xdea5b1 0xdaa433 0xc3204c 0xdb7f81 0xe33239 0x7fabc3a5ab50 0x7fabc2dfda7d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdea5b1]
/usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xdaa433]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x12c) [0xc3204c]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdb7f81]
/usr/bin/mongod() [0xe33239]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7fabc3a5ab50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fabc2dfda7d]
Mon Feb 15 07:00:37.210 [repl writer worker 1]
***aborting after fassert() failure
Mon Feb 15 07:00:37.210 Got signal: 6 (Aborted).
Mon Feb 15 07:00:37.213 Backtrace:
0xdea5b1 0x6d0f29 0x7fabc2d554f0 0x7fabc2d55475 0x7fabc2d586f0 0xdaa46e 0xc3204c 0xdb7f81 0xe33239 0x7fabc3a5ab50 0x7fabc2dfda7d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdea5b1]
/usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x6d0f29]
/lib/x86_64-linux-gnu/libc.so.6(+0x324f0) [0x7fabc2d554f0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7fabc2d55475]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x180) [0x7fabc2d586f0]
/usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xde) [0xdaa46e]
/usr/bin/mongod(_ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x12c) [0xc3204c]
/usr/bin/mongod(_ZN5mongo10threadpool6Worker4loopEv+0x281) [0xdb7f81]
/usr/bin/mongod() [0xe33239]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7fabc3a5ab50]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fabc2dfda7d]
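For context, assertion 10334 fires when the little-endian int32 length prefix of a BSON object is out of range; a zeroed record therefore reports size 0 and its first "element" decodes as EOO (type byte 0x00), exactly as in the message above. A minimal sketch of that validation (an illustration, not MongoDB's actual code; the limit 16793600 is taken from the error message itself):

```python
import struct

# Internal limit from the assertion message: 16 MB plus 16 KB of padding.
BSON_MAX_INTERNAL_SIZE = 16 * 1024 * 1024 + 16 * 1024  # 16793600

def bson_size_ok(buf: bytes) -> bool:
    """Return True if buf starts with a plausible BSON length prefix."""
    if len(buf) < 4:
        return False
    # BSON documents begin with their total size as a little-endian int32.
    (size,) = struct.unpack_from("<i", buf, 0)
    return 0 < size <= BSON_MAX_INTERNAL_SIZE

# An all-zero record -- like the corrupt one in the crash above -- yields
# size 0 and fails the check, triggering assertion 10334.
```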



 Comments   
Comment by Ramon Fernandez Marina [ 19/Feb/16 ]

metikovvadim, the repeated "Assertion: 10334" error messages indicate data corruption. Here are some of the things you can do next if none of your secondaries contain a healthy copy of the data and you have no backups to recover from:

Lastly, please be aware that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. For MongoDB-related support discussion please post on the mongodb-user group, which is the appropriate forum for this sort of issue and where your question will reach a larger audience.

Regards,
Ramón.

Comment by Metikov Vadim [ 19/Feb/16 ]

Today I restarted the initial sync of the third node with
verbose = true
diaglog = 7
and got the following log while syncing the grid_fs database on the new secondary:

2016-02-19T11:02:46.131+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.11, filling with zeroes...
2016-02-19T11:02:46.154+0500 [FileAllocator] flushing directory /var/lib/mongodb
2016-02-19T11:02:46.160+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.11, size: 2047MB,  took 0.028 secs
2016-02-19T11:02:46.160+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks desiredSize:2146426864 fromFreeList: 0 eloc: 11:2000
2016-02-19T11:02:46.613+0500 [rsSync] grid_fs.fs.chunks: clearing collection plan cache - 1000 write operations detected since last refresh.
2016-02-19T11:02:47.356+0500 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-19T11:02:53.852+0500 [rsSync] grid_fs.fs.chunks: clearing collection plan cache - 1000 write operations detected since last refresh.
2016-02-19T11:02:57.907+0500 [rsSync] grid_fs.fs.chunks: clearing collection plan cache - 1000 write operations detected since last refresh.
2016-02-19T11:02:59.032+0500 [journal] flushing directory /var/lib/mongodb/journal
2016-02-19T11:03:02.464+0500 [DataFileSync] flushing mmaps took 3215ms  for 59 files
2016-02-19T11:03:02.464+0500 [DataFileSync] flushing diag log
2016-02-19T11:03:02.694+0500 [journal] lsn set 223784
2016-02-19T11:03:03.236+0500 [rsSync] 208566 objects cloned so far from collection grid_fs.fs.chunks
2016-02-19T11:03:04.801+0500 [rsSync] grid_fs.fs.chunks: clearing collection plan cache - 1000 write operations detected since last refresh.
2016-02-19T11:03:05.295+0500 [rsSync] clone grid_fs.fs.chunks 209151
2016-02-19T11:03:06.636+0500 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-19T11:03:07.663+0500 [rsSync] allocating new extent
2016-02-19T11:03:07.663+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.12, filling with zeroes...
2016-02-19T11:03:07.682+0500 [FileAllocator] flushing directory /var/lib/mongodb
2016-02-19T11:03:07.688+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.12, size: 2047MB,  took 0.024 secs
2016-02-19T11:03:07.688+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks desiredSize:2146426864 fromFreeList: 0 eloc: 12:2000
2016-02-19T11:03:08.169+0500 [rsSync] grid_fs.fs.chunks: clearing collection plan cache - 1000 write operations detected since last refresh.
2016-02-19T11:03:11.204+0500 [rsSync] Database::_addNamespaceToCatalog ns: grid_fs.system.indexes
2016-02-19T11:03:11.204+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.system.indexes desiredSize:8192 fromFreeList: 0 eloc: 8:6ef73000
2016-02-19T11:03:11.204+0500 [rsSync] Database::_addNamespaceToCatalog ns: grid_fs.fs.chunks.$_id_
2016-02-19T11:03:11.204+0500 [rsSync] build index on: grid_fs.fs.chunks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs.fs.chunks" }
2016-02-19T11:03:11.204+0500 [rsSync] grid_fs.fs.chunks: clearing plan cache - collection info cache reset
2016-02-19T11:03:11.205+0500 [rsSync] allocating new extent
2016-02-19T11:03:11.205+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks.$_id_ desiredSize:131072 fromFreeList: 0 eloc: 8:6ef75000
2016-02-19T11:03:11.205+0500 [rsSync]    building index using bulk method
2016-02-19T11:03:11.909+0500 [rsSync]    bulk commit starting
2016-02-19T11:03:12.455+0500 [rsSync] allocating new extent
2016-02-19T11:03:12.455+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks.$_id_ desiredSize:524288 fromFreeList: 0 eloc: 8:6ef95000
2016-02-19T11:03:12.464+0500 [rsSync] allocating new extent
2016-02-19T11:03:12.464+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks.$_id_ desiredSize:2097152 fromFreeList: 0 eloc: 8:6f015000
2016-02-19T11:03:12.499+0500 [rsSync] allocating new extent
2016-02-19T11:03:12.499+0500 [rsSync] ExtentManager::increaseStorageSize ns:grid_fs.fs.chunks.$_id_ desiredSize:8388608 fromFreeList: 0 eloc: 8:6f215000
2016-02-19T11:03:12.558+0500 [rsSync]    done building bottom layer, going to commit
2016-02-19T11:03:12.558+0500 [rsSync] build index done.  scanned 210734 total records. 1.353 secs
2016-02-19T11:03:12.558+0500 [rsSync] grid_fs.fs.chunks: clearing plan cache - collection info cache reset
2016-02-19T11:03:12.587+0500 [rsSync] grid_fs.fs.chunks: clearing plan cache - collection info cache reset
2016-02-19T11:03:12.587+0500 [rsSync] replSet initial sync cloning db: admin
2016-02-19T11:03:12.616+0500 [rsSync] replSet initial sync data copy, starting syncup
2016-02-19T11:03:12.661+0500 [rsBackgroundSync] replset bgsync fetch queue set to: 56c6af2f:8 -2672535017351615784
2016-02-19T11:03:12.661+0500 [rsBackgroundSync] replSet syncing to: 79.110.251.106:27017
2016-02-19T11:03:12.662+0500 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-19T11:03:12.690+0500 [rsBackgroundSync] replset setting syncSourceFeedback to 79.110.251.106:27017
2016-02-19T11:03:12.690+0500 [ConnectBG] BackgroundJob starting: ConnectBG
2016-02-19T11:03:12.691+0500 [rsBackgroundSync] detecting upstream updater
2016-02-19T11:03:12.691+0500 [rsBackgroundSync] upstream updater is supported
2016-02-19T11:03:12.788+0500 [rsSync] oplog sync 1 of 3

Should I reindex the working PRIMARY? There were huge deletions.

Comment by Metikov Vadim [ 18/Feb/16 ]

Now, with version 2.6.11, the primary no longer crashes.
But I still can't bring up a new secondary via initial sync, and chunk migration from the second shard ("unistore-2") is not running.

Comment by Metikov Vadim [ 18/Feb/16 ]

2016-02-19T01:27:52.036+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool
2016-02-19T01:27:52.036+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:27:52-56c62948ec9c57d52f9eefef", server: "spain", clientAddr: ":27017", time: new Date(1455827272036), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 38, note: "aborted" } }
2016-02-19T01:27:52.487+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:27:52.487+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:27:52.487+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:27:56.383+0500 [conn178] query grid_fs.fs.chunks query: { n: 4, files_id: ObjectId('51af3f76f7c07966503bf5fa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:1047788 nreturned:1 reslen:8388690 1063ms
2016-02-19T01:27:56.384+0500 [conn261] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:79 974ms
2016-02-19T01:27:56.384+0500 [conn261] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:63 reslen:140 974ms
2016-02-19T01:27:56.404+0500 [conn235] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:71 881ms
2016-02-19T01:27:56.404+0500 [conn235] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:66 reslen:140 881ms
2016-02-19T01:27:56.404+0500 [conn327] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:78 880ms
2016-02-19T01:27:56.404+0500 [conn327] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:64 reslen:140 880ms
2016-02-19T01:27:56.405+0500 [conn284] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:70 825ms
2016-02-19T01:27:56.405+0500 [conn284] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:65 reslen:140 825ms
2016-02-19T01:27:56.405+0500 [conn265] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:59 667ms
2016-02-19T01:27:56.405+0500 [conn265] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:52 reslen:140 667ms
2016-02-19T01:27:56.405+0500 [conn243] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:69 650ms
2016-02-19T01:27:56.405+0500 [conn243] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:64 reslen:140 650ms
2016-02-19T01:27:56.405+0500 [conn1112] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:56 475ms
2016-02-19T01:27:56.405+0500 [conn1112] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:49 reslen:140 475ms
2016-02-19T01:27:56.405+0500 [conn245] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:69 445ms
2016-02-19T01:27:56.405+0500 [conn245] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:67 reslen:140 445ms
2016-02-19T01:27:56.405+0500 [conn1111] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:60 433ms
2016-02-19T01:27:56.405+0500 [conn1111] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:50 reslen:140 433ms
2016-02-19T01:27:56.406+0500 [conn349] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:56 430ms
2016-02-19T01:27:56.406+0500 [conn349] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:68 reslen:140 430ms
2016-02-19T01:27:56.406+0500 [conn277] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:52 423ms
2016-02-19T01:27:56.406+0500 [conn277] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:47 reslen:140 424ms
2016-02-19T01:27:56.406+0500 [conn263] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:67 417ms
2016-02-19T01:27:56.406+0500 [conn132] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) r:45 reslen:4003 295ms
2016-02-19T01:27:56.406+0500 [conn263] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:65 reslen:140 417ms
2016-02-19T01:27:56.406+0500 [conn350] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:66 406ms
2016-02-19T01:27:56.406+0500 [conn350] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:83 reslen:140 406ms
2016-02-19T01:27:56.406+0500 [conn249] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:66 979ms
2016-02-19T01:27:56.406+0500 [conn249] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:64 reslen:140 979ms
2016-02-19T01:27:56.406+0500 [conn1113] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:56 446ms
2016-02-19T01:27:56.406+0500 [conn1113] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:48 reslen:140 446ms
2016-02-19T01:27:57.868+0500 [clientcursormon] mem (MB) res:1743 virt:14428941
2016-02-19T01:27:57.869+0500 [clientcursormon]  mapped (incl journal view):14427804
2016-02-19T01:27:57.869+0500 [clientcursormon]  connections:300
2016-02-19T01:27:57.869+0500 [clientcursormon]  replication threads:32
2016-02-19T01:28:18.452+0500 [conn77] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('55cb3330f7c07962429c712c') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:50305 nreturned:1 reslen:262226 116ms
2016-02-19T01:28:22.537+0500 [conn20] query grid_fs.fs.chunks query: { n: 5, files_id: ObjectId('55cb3330f7c07962429c712f') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:80523 nreturned:1 reslen:212307 131ms
2016-02-19T01:28:34.732+0500 [conn66] query grid_fs.fs.files query: { _id: ObjectId('56ab1a80e8d7b65ee53a3411') } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 numYields:1 locks(micros) r:97744 nreturned:1 reslen:544 102ms
2016-02-19T01:28:49.449+0500 [conn540] query grid_fs.fs.files query: { _id: ObjectId('530afc18f7c07905d64a095f') } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 numYields:1 locks(micros) r:127482 nreturned:1 reslen:573 117ms
2016-02-19T01:28:54.714+0500 [initandlisten] connection accepted from 79.110.251.109:33332 #1199 (300 connections now open)
2016-02-19T01:28:58.813+0500 [conn5] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb308a4698659faded9cfe') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:28:59.294+0500 [conn6] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb308a8e9eeea98c8f359d') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:00.177+0500 [conn15] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb3085a399feeb6ce8461c') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:01.354+0500 [conn16] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56c4baa6f64c265b5906e256') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:02.813+0500 [conn13] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb30874a4d3cd7df575d3a') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:04.820+0500 [conn48] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb3083a81a8dd0f534ec99') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:04.990+0500 [conn54] query grid_fs.fs.chunks query: { n: 17, files_id: ObjectId('54cb2cc2e8d7b638b8b42687') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:15970 nreturned:1 reslen:262226 142ms
2016-02-19T01:29:05.275+0500 [conn55] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb3084a7a9adfdf698683c') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
...
2016-02-19T01:29:18.853+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:29:18.921+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 67ms)
2016-02-19T01:29:18.933+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:29:18.935+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:29:18.944+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]
2016-02-19T01:29:18.944+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool
2016-02-19T01:29:18.944+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:29:18-56c6299eec9c57d52f9ef480", server: "spain", clientAddr: ":27017", time: new Date(1455827358944), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 1, note: "aborted" } }
2016-02-19T01:29:18.954+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:29:18.954+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:29:18.954+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:29:20.532+0500 [conn68] query grid_fs.fs.files query: { _id: ObjectId('53b665a7e8d7b6046c9d31f4') } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 numYields:1 locks(micros) r:173 nreturned:1 reslen:574 109ms
...
2016-02-19T01:29:27.427+0500 [conn85] command admin.$cmd command: writebacklisten { writebacklisten: ObjectId('56bb3076e9916c5f9c0f723b') } ntoreturn:1 keyUpdates:0 numYields:0  reslen:44 300000ms
2016-02-19T01:29:27.468+0500 [conn21] query grid_fs.fs.chunks query: { n: 13, files_id: ObjectId('530f10d0e8d7b60d4a29da45') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:65331 nreturned:1 reslen:262226 119ms
2016-02-19T01:29:27.489+0500 [conn76] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('50e937887efa6e0f41c132d4') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:2045 nreturned:1 reslen:86870 118ms
...
2016-02-19T01:29:54.011+0500 [rsHealthPoll] replSet member 79.110.251.107:27017 is now in state RECOVERING
...
2016-02-19T01:29:56.011+0500 [rsHealthPoll] replSet member 79.110.251.107:27017 is now in state SECONDARY

Comment by Metikov Vadim [ 18/Feb/16 ]

2016-02-19T01:27:41.195+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:27:41-56c6293dec9c57d52f9eef5e", server: "spain", clientAddr: ":27017", time: new Date(1455827261195), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 1, note: "aborted" } }
2016-02-19T01:27:41.386+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:27:41.386+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:27:41.386+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:27:43.406+0500 [conn178] query grid_fs.fs.chunks query: { n: 1, files_id: ObjectId('51af3f76f7c07966503bf5fa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:968241 nreturned:1 reslen:8388690 985ms
2016-02-19T01:27:43.407+0500 [conn235] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:75 958ms
2016-02-19T01:27:43.407+0500 [conn235] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:64 reslen:140 958ms
2016-02-19T01:27:43.425+0500 [conn326] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:67 584ms
2016-02-19T01:27:43.425+0500 [conn326] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:78 reslen:140 584ms
2016-02-19T01:27:43.425+0500 [conn277] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:74 417ms
2016-02-19T01:27:43.425+0500 [conn277] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:63 reslen:140 417ms
2016-02-19T01:27:43.425+0500 [conn327] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:66 821ms
2016-02-19T01:27:43.425+0500 [conn327] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:63 reslen:140 821ms
2016-02-19T01:27:43.425+0500 [conn324] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:49 624ms
2016-02-19T01:27:43.426+0500 [conn324] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:83 reslen:140 624ms
2016-02-19T01:27:43.426+0500 [conn284] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:36 344ms
2016-02-19T01:27:43.426+0500 [conn284] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:71 reslen:140 344ms
2016-02-19T01:27:43.427+0500 [conn534] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:69 532ms
2016-02-19T01:27:43.427+0500 [conn534] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:80 reslen:140 533ms
2016-02-19T01:27:47.422+0500 [initandlisten] connection accepted from 127.0.0.1:34161 #1167 (301 connections now open)
2016-02-19T01:27:47.517+0500 [conn178] query grid_fs.fs.chunks query: { n: 2, files_id: ObjectId('51af3f76f7c07966503bf5fa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:778123 nreturned:1 reslen:8388690 851ms
2016-02-19T01:27:47.517+0500 [conn277] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:62 718ms
2016-02-19T01:27:47.517+0500 [conn277] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:63 reslen:140 718ms
2016-02-19T01:27:47.518+0500 [conn326] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:58 656ms
2016-02-19T01:27:47.518+0500 [conn326] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:80 reslen:140 656ms
2016-02-19T01:27:47.518+0500 [conn235] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:73 539ms
2016-02-19T01:27:47.518+0500 [conn235] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:66 reslen:140 539ms
2016-02-19T01:27:47.518+0500 [conn327] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:69 459ms
2016-02-19T01:27:47.518+0500 [conn327] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:82 reslen:140 459ms
2016-02-19T01:27:47.518+0500 [conn245] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:57 329ms
2016-02-19T01:27:47.518+0500 [conn245] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:69 reslen:140 329ms
2016-02-19T01:27:47.518+0500 [conn323] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:59 308ms
2016-02-19T01:27:47.519+0500 [conn323] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:93 reslen:140 308ms
2016-02-19T01:27:47.519+0500 [conn261] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:41 169ms
2016-02-19T01:27:47.519+0500 [conn261] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:29 reslen:140 170ms
2016-02-19T01:27:47.519+0500 [conn284] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:30 549ms
2016-02-19T01:27:47.519+0500 [conn284] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:24 reslen:140 549ms
2016-02-19T01:27:47.519+0500 [conn328] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:50 392ms
2016-02-19T01:27:47.519+0500 [conn328] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:72 reslen:140 392ms
2016-02-19T01:27:47.519+0500 [conn243] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:43 308ms
2016-02-19T01:27:47.519+0500 [conn243] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:38 reslen:140 309ms
2016-02-19T01:27:47.519+0500 [conn324] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:36 690ms
2016-02-19T01:27:47.519+0500 [conn324] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:64 reslen:140 690ms
2016-02-19T01:27:47.521+0500 [conn1167] end connection 127.0.0.1:34161 (300 connections now open)
2016-02-19T01:27:47.566+0500 [initandlisten] connection accepted from 127.0.0.1:34162 #1168 (301 connections now open)
2016-02-19T01:27:47.571+0500 [conn1168] end connection 127.0.0.1:34162 (300 connections now open)
2016-02-19T01:27:48.985+0500 [conn20] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('535df611f7c0796bbb5704fa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:34607 nreturned:1 reslen:262226 104ms
2016-02-19T01:27:51.013+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:27:51.972+0500 [conn178] query grid_fs.fs.chunks query: { n: 3, files_id: ObjectId('51af3f76f7c07966503bf5fa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:1074793 nreturned:1 reslen:8388690 1084ms
2016-02-19T01:27:51.973+0500 [conn272] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:80 1019ms
2016-02-19T01:27:51.973+0500 [conn272] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:66 reslen:140 1020ms
2016-02-19T01:27:51.973+0500 [conn328] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:59 1014ms
2016-02-19T01:27:51.973+0500 [conn328] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:45 reslen:140 1014ms
2016-02-19T01:27:51.973+0500 [conn245] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:69 973ms
2016-02-19T01:27:51.973+0500 [conn245] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:78 reslen:140 973ms
2016-02-19T01:27:51.973+0500 [conn277] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:82 906ms
2016-02-19T01:27:51.973+0500 [conn277] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:68 reslen:140 906ms
2016-02-19T01:27:51.973+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 61ms)
2016-02-19T01:27:51.973+0500 [conn284] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:61 854ms
2016-02-19T01:27:51.973+0500 [conn284] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:59 reslen:140 854ms
2016-02-19T01:27:51.974+0500 [conn326] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:63 836ms
2016-02-19T01:27:51.974+0500 [conn326] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:58 reslen:140 836ms
2016-02-19T01:27:51.974+0500 [conn243] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:62 755ms
2016-02-19T01:27:51.974+0500 [conn243] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:58 reslen:140 755ms
2016-02-19T01:27:51.974+0500 [conn235] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:63 699ms
2016-02-19T01:27:51.974+0500 [conn235] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:57 reslen:140 699ms
2016-02-19T01:27:51.974+0500 [conn261] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:61 678ms
2016-02-19T01:27:51.974+0500 [conn261] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:60 reslen:140 678ms
2016-02-19T01:27:51.974+0500 [conn327] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:61 665ms
2016-02-19T01:27:51.974+0500 [conn327] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:57 reslen:140 665ms
2016-02-19T01:27:51.974+0500 [conn324] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:60 571ms
2016-02-19T01:27:51.974+0500 [conn324] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:56 reslen:140 571ms
2016-02-19T01:27:51.974+0500 [conn264] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:60 443ms
2016-02-19T01:27:51.975+0500 [conn264] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:60 reslen:140 443ms
2016-02-19T01:27:51.975+0500 [conn263] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:48 340ms
2016-02-19T01:27:51.975+0500 [conn263] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:45 reslen:140 340ms
2016-02-19T01:27:51.975+0500 [conn1112] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:50 151ms
2016-02-19T01:27:51.975+0500 [conn1112] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:58 reslen:140 151ms
2016-02-19T01:27:51.975+0500 [conn1111] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:59 149ms
2016-02-19T01:27:51.975+0500 [conn1111] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:61 reslen:140 150ms
2016-02-19T01:27:51.975+0500 [conn1113] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:46 150ms
2016-02-19T01:27:51.975+0500 [conn1113] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:46 reslen:140 150ms
2016-02-19T01:27:51.975+0500 [conn349] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:48 149ms
2016-02-19T01:27:51.975+0500 [conn349] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:42 reslen:140 149ms
2016-02-19T01:27:51.975+0500 [conn247] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:47 117ms
2016-02-19T01:27:51.975+0500 [conn247] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:43 reslen:140 117ms
2016-02-19T01:27:51.975+0500 [conn323] insert grid_fs.system.indexes query: { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ninserted:0 keyUpdates:0 numYields:0 locks(micros) w:46 986ms
2016-02-19T01:27:51.975+0500 [conn323] command grid_fs.$cmd command: insert { insert: "system.indexes", documents: [ { unique: true, ns: "grid_fs.fs.chunks", name: "files_id_1_n_1", key: { files_id: 1, n: 1 } } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "unistore-1", shardVersion: [ Timestamp 0|0, ObjectId('00000000ffffffffffffffff') ], session: 0 } } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:40 reslen:140 986ms
2016-02-19T01:27:51.990+0500 [conn541] command admin.$cmd command: _recvChunkStart { _recvChunkStart: "grid_fs.fs.chunks", from: "unistore-2/79.110.251.108:27017,79.172.49.114:27017", min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, configServer: "79.172.49.130:27019,79.172.49.90:27019,79.172.49.50:27019", secondaryThrottle: true } ntoreturn:1 keyUpdates:0 numYields:0 locks(micros) w:3 reslen:47 976ms
2016-02-19T01:27:51.990+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:27:52.029+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:27:52.036+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]

Comment by Metikov Vadim [ 18/Feb/16 ]

As you can see, replication stops at datafile "grid_fs.12" and the third node becomes SECONDARY.
However, the 1st and 2nd nodes (primary and secondary) have 3500+ datafiles for the grid_fs database.
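For context on the assertion itself: the size reported in "BSONObj size: 0 (0x0) is invalid" comes from the 4-byte little-endian length prefix at the start of every BSON document, so a zero there means the reader hit a zeroed (likely corrupt or uninitialized) record region, and "First element: EOO" follows because byte 4 is then the end-of-object marker. A standalone sketch of the sanity check (not MongoDB's actual code; the constant mirrors the limit printed in the assertion message):

```python
import struct

# Upper bound printed in the assertion message (16MB plus internal padding).
MAX_BSON_SIZE = 16793600

def bson_size_ok(buf: bytes) -> bool:
    """Approximate the size sanity check behind assertion 10334:
    the first 4 bytes of a BSON document hold its little-endian total
    length, which must exceed the 5-byte minimum ({} is int32(5) plus
    a trailing 0x00) and not exceed the server's max object size."""
    if len(buf) < 5:
        return False
    (size,) = struct.unpack_from("<i", buf, 0)
    return 5 <= size <= MAX_BSON_SIZE

# A zeroed record region yields size 0, as in the logs above.
assert not bson_size_ok(b"\x00\x00\x00\x00\x00")
# The minimal valid BSON document {} passes.
assert bson_size_ok(b"\x05\x00\x00\x00\x00")
```

Hitting this check during _recvChunkStart / removeRange, as in the stack trace above, points at an invalid document already on disk on the donor or recipient shard rather than at the migration logic itself.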

MongoDB server version 2.6.11 produced only these logs on the primary node ("spain") at that time:
2016-02-19T01:25:08.658+0500 [rsHealthPoll] replSet member 79.110.251.107:27017 is up
2016-02-19T01:25:08.658+0500 [rsHealthPoll] replSet member 79.110.251.107:27017 is now in state STARTUP2
2016-02-19T01:25:09.075+0500 [initandlisten] connection accepted from 79.110.251.107:60592 #1110 (295 connections now open)
2016-02-19T01:25:13.359+0500 [initandlisten] connection accepted from 79.172.49.169:52962 #1111 (296 connections now open)
2016-02-19T01:25:13.360+0500 [initandlisten] connection accepted from 79.172.49.169:52960 #1112 (297 connections now open)
2016-02-19T01:25:13.360+0500 [initandlisten] connection accepted from 79.172.49.169:52961 #1113 (298 connections now open)
2016-02-19T01:25:13.360+0500 [initandlisten] connection accepted from 79.172.49.169:52959 #1114 (299 connections now open)
2016-02-19T01:25:13.361+0500 [initandlisten] connection accepted from 79.172.49.169:52963 #1115 (300 connections now open)
2016-02-19T01:25:21.043+0500 [initandlisten] connection accepted from 127.0.0.1:34086 #1116 (301 connections now open)
2016-02-19T01:25:21.048+0500 [conn1116] end connection 127.0.0.1:34086 (300 connections now open)
2016-02-19T01:25:21.096+0500 [initandlisten] connection accepted from 127.0.0.1:34087 #1117 (301 connections now open)
2016-02-19T01:25:21.102+0500 [conn1117] end connection 127.0.0.1:34087 (300 connections now open)
2016-02-19T01:25:21.147+0500 [initandlisten] connection accepted from 127.0.0.1:34088 #1118 (301 connections now open)
2016-02-19T01:25:21.153+0500 [conn1118] end connection 127.0.0.1:34088 (300 connections now open)
2016-02-19T01:25:21.197+0500 [initandlisten] connection accepted from 127.0.0.1:34089 #1119 (301 connections now open)
2016-02-19T01:25:21.202+0500 [conn1119] end connection 127.0.0.1:34089 (300 connections now open)
2016-02-19T01:25:23.941+0500 [conn1107] end connection 79.110.251.109:33230 (299 connections now open)
2016-02-19T01:25:23.942+0500 [initandlisten] connection accepted from 79.110.251.109:33282 #1120 (300 connections now open)
2016-02-19T01:25:25.445+0500 [conn1108] end connection 79.172.49.90:44046 (299 connections now open)
2016-02-19T01:25:26.431+0500 [initandlisten] connection accepted from 79.172.49.90:49857 #1121 (300 connections now open)
2016-02-19T01:25:37.082+0500 [conn1110] end connection 79.110.251.107:60592 (299 connections now open)
2016-02-19T01:25:37.083+0500 [initandlisten] connection accepted from 79.110.251.107:60602 #1122 (300 connections now open)
2016-02-19T01:25:47.635+0500 [initandlisten] connection accepted from 127.0.0.1:34096 #1123 (301 connections now open)
2016-02-19T01:25:47.641+0500 [conn1123] end connection 127.0.0.1:34096 (300 connections now open)
2016-02-19T01:25:47.689+0500 [initandlisten] connection accepted from 127.0.0.1:34097 #1124 (301 connections now open)
2016-02-19T01:25:47.694+0500 [conn1124] end connection 127.0.0.1:34097 (300 connections now open)
2016-02-19T01:25:52.997+0500 [initandlisten] connection accepted from 127.0.0.1:34099 #1125 (301 connections now open)
2016-02-19T01:25:53.002+0500 [conn1125] end connection 127.0.0.1:34099 (300 connections now open)
2016-02-19T01:25:53.046+0500 [initandlisten] connection accepted from 127.0.0.1:34100 #1126 (301 connections now open)
2016-02-19T01:25:53.052+0500 [conn1126] end connection 127.0.0.1:34100 (300 connections now open)
2016-02-19T01:25:53.102+0500 [initandlisten] connection accepted from 127.0.0.1:34101 #1127 (301 connections now open)
2016-02-19T01:25:53.108+0500 [conn1127] end connection 127.0.0.1:34101 (300 connections now open)
2016-02-19T01:25:53.152+0500 [initandlisten] connection accepted from 127.0.0.1:34102 #1128 (301 connections now open)
2016-02-19T01:25:53.159+0500 [conn1128] end connection 127.0.0.1:34102 (300 connections now open)
2016-02-19T01:25:53.208+0500 [initandlisten] connection accepted from 127.0.0.1:34103 #1129 (301 connections now open)
2016-02-19T01:25:53.215+0500 [conn1129] end connection 127.0.0.1:34103 (300 connections now open)
2016-02-19T01:25:53.259+0500 [initandlisten] connection accepted from 127.0.0.1:34104 #1130 (301 connections now open)
2016-02-19T01:25:53.265+0500 [conn1130] end connection 127.0.0.1:34104 (300 connections now open)
2016-02-19T01:25:53.309+0500 [initandlisten] connection accepted from 127.0.0.1:34105 #1131 (301 connections now open)
2016-02-19T01:25:53.316+0500 [conn1131] end connection 127.0.0.1:34105 (300 connections now open)
2016-02-19T01:25:53.373+0500 [initandlisten] connection accepted from 127.0.0.1:34106 #1132 (301 connections now open)
2016-02-19T01:25:53.378+0500 [conn1132] end connection 127.0.0.1:34106 (300 connections now open)
2016-02-19T01:25:53.427+0500 [initandlisten] connection accepted from 127.0.0.1:34107 #1133 (301 connections now open)
2016-02-19T01:25:53.433+0500 [conn1133] end connection 127.0.0.1:34107 (300 connections now open)
2016-02-19T01:25:54.061+0500 [conn1120] end connection 79.110.251.109:33282 (299 connections now open)
2016-02-19T01:25:54.061+0500 [initandlisten] connection accepted from 79.110.251.109:33288 #1134 (300 connections now open)
2016-02-19T01:25:56.748+0500 [conn1121] end connection 79.172.49.90:49857 (299 connections now open)
2016-02-19T01:25:56.766+0500 [initandlisten] connection accepted from 79.172.49.90:55792 #1135 (300 connections now open)
2016-02-19T01:25:57.850+0500 [clientcursormon] mem (MB) res:1562 virt:14428941
2016-02-19T01:25:57.850+0500 [clientcursormon]  mapped (incl journal view):14427804
2016-02-19T01:25:57.850+0500 [clientcursormon]  connections:300
2016-02-19T01:25:57.850+0500 [clientcursormon]  replication threads:32
2016-02-19T01:25:58.840+0500 [conn117] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('56bae0bae8d7b638b0dd4c04') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:65755 nreturned:1 reslen:262226 106ms
2016-02-19T01:26:03.062+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:26:03.131+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 68ms)
2016-02-19T01:26:03.144+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:26:03.145+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:03.150+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]
2016-02-19T01:26:03.150+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool
2016-02-19T01:26:03.150+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:26:03-56c628dbec9c57d52f9eeb57", server: "spain", clientAddr: ":27017", time: new Date(1455827163150), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 1, note: "aborted" } }
2016-02-19T01:26:03.358+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:03.358+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:03.358+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:07.092+0500 [conn1122] end connection 79.110.251.107:60602 (299 connections now open)
2016-02-19T01:26:07.092+0500 [initandlisten] connection accepted from 79.110.251.107:60610 #1136 (300 connections now open)
2016-02-19T01:26:12.816+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:26:12.879+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 62ms)
2016-02-19T01:26:12.891+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:26:12.893+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:12.898+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]
2016-02-19T01:26:12.898+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool
2016-02-19T01:26:12.898+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:26:12-56c628e4ec9c57d52f9eeba1", server: "spain", clientAddr: ":27017", time: new Date(1455827172898), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 1, note: "aborted" } }
2016-02-19T01:26:12.903+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:12.903+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:12.903+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:14.753+0500 [conn539] query grid_fs.fs.chunks query: { n: 3, files_id: ObjectId('51bc5d95f7c0796651406c16') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:0 locks(micros) r:109249 nreturned:1 reslen:262226 109ms
2016-02-19T01:26:20.943+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:26:21.013+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 69ms)
2016-02-19T01:26:21.026+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:26:21.028+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:21.032+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]
2016-02-19T01:26:21.032+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool
2016-02-19T01:26:21.032+0500 [migrateThread] about to log metadata event: { _id: "spain-2016-02-18T20:26:21-56c628edec9c57d52f9eebe1", server: "spain", clientAddr: ":27017", time: new Date(1455827181032), what: "moveChunk.to", ns: "grid_fs.fs.chunks", details: { min: { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, max: { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }, step 1 of 5: 1, note: "aborted" } }
2016-02-19T01:26:21.046+0500 [migrateThread] ERROR: migrate failed: BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:26:21.058+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:21.058+0500 [migrateThread] warning: cannot remove pending chunk [{ files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 }, { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 }), this shard does not contain the chunk
2016-02-19T01:26:23.583+0500 [initandlisten] connection accepted from 127.0.0.1:34117 #1137 (301 connections now open)
2016-02-19T01:26:23.587+0500 [conn1137] end connection 127.0.0.1:34117 (300 connections now open)
2016-02-19T01:26:23.628+0500 [initandlisten] connection accepted from 127.0.0.1:34118 #1138 (301 connections now open)
2016-02-19T01:26:23.633+0500 [conn1138] end connection 127.0.0.1:34118 (300 connections now open)
2016-02-19T01:26:23.678+0500 [initandlisten] connection accepted from 127.0.0.1:34119 #1139 (301 connections now open)
2016-02-19T01:26:23.684+0500 [conn1139] end connection 127.0.0.1:34119 (300 connections now open)
2016-02-19T01:26:23.727+0500 [initandlisten] connection accepted from 127.0.0.1:34120 #1140 (301 connections now open)
2016-02-19T01:26:23.733+0500 [conn1140] end connection 127.0.0.1:34120 (300 connections now open)
2016-02-19T01:26:24.116+0500 [conn1134] end connection 79.110.251.109:33288 (299 connections now open)
2016-02-19T01:26:24.116+0500 [initandlisten] connection accepted from 79.110.251.109:33295 #1141 (300 connections now open)
2016-02-19T01:26:27.563+0500 [conn1135] end connection 79.172.49.90:55792 (299 connections now open)
2016-02-19T01:26:27.582+0500 [initandlisten] connection accepted from 79.172.49.90:33417 #1142 (300 connections now open)
2016-02-19T01:26:28.843+0500 [initandlisten] connection accepted from 127.0.0.1:34123 #1143 (301 connections now open)
2016-02-19T01:26:28.847+0500 [conn1143] end connection 127.0.0.1:34123 (300 connections now open)
2016-02-19T01:26:30.894+0500 [initandlisten] connection accepted from 127.0.0.1:34125 #1144 (301 connections now open)
2016-02-19T01:26:30.901+0500 [conn1144] end connection 127.0.0.1:34125 (300 connections now open)
2016-02-19T01:26:37.100+0500 [conn1136] end connection 79.110.251.107:60610 (299 connections now open)
2016-02-19T01:26:37.100+0500 [initandlisten] connection accepted from 79.110.251.107:60618 #1145 (300 connections now open)
2016-02-19T01:26:47.118+0500 [initandlisten] connection accepted from 127.0.0.1:34130 #1146 (301 connections now open)
2016-02-19T01:26:47.124+0500 [conn1146] end connection 127.0.0.1:34130 (300 connections now open)
2016-02-19T01:26:47.172+0500 [initandlisten] connection accepted from 127.0.0.1:34131 #1147 (301 connections now open)
2016-02-19T01:26:47.178+0500 [conn1147] end connection 127.0.0.1:34131 (300 connections now open)
2016-02-19T01:26:53.288+0500 [initandlisten] connection accepted from 127.0.0.1:34133 #1148 (301 connections now open)
2016-02-19T01:26:53.293+0500 [conn1148] end connection 127.0.0.1:34133 (300 connections now open)
2016-02-19T01:26:53.349+0500 [initandlisten] connection accepted from 127.0.0.1:34134 #1149 (301 connections now open)
2016-02-19T01:26:53.354+0500 [conn1149] end connection 127.0.0.1:34134 (300 connections now open)
2016-02-19T01:26:53.391+0500 [initandlisten] connection accepted from 127.0.0.1:34135 #1150 (301 connections now open)
2016-02-19T01:26:53.396+0500 [conn1150] end connection 127.0.0.1:34135 (300 connections now open)
2016-02-19T01:26:53.440+0500 [initandlisten] connection accepted from 127.0.0.1:34136 #1151 (301 connections now open)
2016-02-19T01:26:53.446+0500 [conn1151] end connection 127.0.0.1:34136 (300 connections now open)
2016-02-19T01:26:53.491+0500 [initandlisten] connection accepted from 127.0.0.1:34137 #1152 (301 connections now open)
2016-02-19T01:26:53.498+0500 [conn1152] end connection 127.0.0.1:34137 (300 connections now open)
2016-02-19T01:26:53.547+0500 [initandlisten] connection accepted from 127.0.0.1:34138 #1153 (301 connections now open)
2016-02-19T01:26:53.552+0500 [conn1153] end connection 127.0.0.1:34138 (300 connections now open)
2016-02-19T01:26:53.595+0500 [initandlisten] connection accepted from 127.0.0.1:34139 #1154 (301 connections now open)
2016-02-19T01:26:53.601+0500 [conn1154] end connection 127.0.0.1:34139 (300 connections now open)
2016-02-19T01:26:53.645+0500 [initandlisten] connection accepted from 127.0.0.1:34140 #1155 (301 connections now open)
2016-02-19T01:26:53.650+0500 [conn1155] end connection 127.0.0.1:34140 (300 connections now open)
2016-02-19T01:26:53.693+0500 [initandlisten] connection accepted from 127.0.0.1:34141 #1156 (301 connections now open)
2016-02-19T01:26:53.698+0500 [conn1156] end connection 127.0.0.1:34141 (300 connections now open)
2016-02-19T01:26:54.126+0500 [conn1141] end connection 79.110.251.109:33295 (299 connections now open)
2016-02-19T01:26:54.126+0500 [initandlisten] connection accepted from 79.110.251.109:33303 #1157 (300 connections now open)
2016-02-19T01:27:00.405+0500 [conn1142] end connection 79.172.49.90:33417 (299 connections now open)
2016-02-19T01:27:00.426+0500 [initandlisten] connection accepted from 79.172.49.90:39201 #1158 (300 connections now open)
2016-02-19T01:27:02.564+0500 [conn100] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('56c2b3f1f7c07966bb54bf80') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:59411 nreturned:1 reslen:262226 113ms
2016-02-19T01:27:07.111+0500 [conn1145] end connection 79.110.251.107:60618 (299 connections now open)
2016-02-19T01:27:07.111+0500 [initandlisten] connection accepted from 79.110.251.107:60627 #1159 (300 connections now open)
2016-02-19T01:27:23.094+0500 [initandlisten] connection accepted from 127.0.0.1:34150 #1160 (301 connections now open)
2016-02-19T01:27:23.099+0500 [conn1160] end connection 127.0.0.1:34150 (300 connections now open)
2016-02-19T01:27:23.141+0500 [initandlisten] connection accepted from 127.0.0.1:34151 #1161 (301 connections now open)
2016-02-19T01:27:23.147+0500 [conn1161] end connection 127.0.0.1:34151 (300 connections now open)
2016-02-19T01:27:23.190+0500 [initandlisten] connection accepted from 127.0.0.1:34152 #1162 (301 connections now open)
2016-02-19T01:27:23.197+0500 [conn1162] end connection 127.0.0.1:34152 (300 connections now open)
2016-02-19T01:27:23.245+0500 [initandlisten] connection accepted from 127.0.0.1:34153 #1163 (301 connections now open)
2016-02-19T01:27:23.250+0500 [conn1163] end connection 127.0.0.1:34153 (300 connections now open)
2016-02-19T01:27:24.329+0500 [conn1157] end connection 79.110.251.109:33303 (299 connections now open)
2016-02-19T01:27:24.329+0500 [initandlisten] connection accepted from 79.110.251.109:33310 #1164 (300 connections now open)
2016-02-19T01:27:30.744+0500 [conn1158] end connection 79.172.49.90:39201 (299 connections now open)
2016-02-19T01:27:30.766+0500 [initandlisten] connection accepted from 79.172.49.90:44513 #1165 (300 connections now open)
2016-02-19T01:27:34.437+0500 [conn100] query grid_fs.fs.chunks query: { n: 0, files_id: ObjectId('50dd567d7efa6e0b9d527eaa') } planSummary: IXSCAN { files_id: 1, n: 1 } ntoskip:0 nscanned:1 nscannedObjects:1 keyUpdates:0 numYields:1 locks(micros) r:55102 nreturned:1 reslen:72548 139ms
2016-02-19T01:27:37.122+0500 [conn1159] end connection 79.110.251.107:60627 (299 connections now open)
2016-02-19T01:27:37.122+0500 [initandlisten] connection accepted from 79.110.251.107:60635 #1166 (300 connections now open)
2016-02-19T01:27:41.106+0500 [conn541] remotely refreshing metadata for grid_fs.fs.chunks based on current shard version 199468|1165||54db1dc16d40a376233899ee, current metadata version is 199468|1165||54db1dc16d40a376233899ee
2016-02-19T01:27:41.175+0500 [conn541] metadata of collection grid_fs.fs.chunks already up to date (shard version : 199468|1165||54db1dc16d40a376233899ee, took 68ms)
2016-02-19T01:27:41.188+0500 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51bc71a1f7c07966503f157f'), n: 115 } -> { files_id: ObjectId('51bc71aff7c07966503f1603'), n: 0 } for collection grid_fs.fs.chunks from unistore-2/79.110.251.108:27017,79.172.49.114:27017 at epoch 54db1dc16d40a376233899ee
2016-02-19T01:27:41.190+0500 [migrateThread] Assertion: 10334:BSONObj size: 0 (0x0) is invalid. Size must be between 0 and 16793600(16MB) First element: EOO
2016-02-19T01:27:41.194+0500 [migrateThread] grid_fs.fs.chunks 0x121d451 0x11bcbb9 0x11a1916 0x11a1e7c 0x7763d3 0xab06bc 0xd8f7df 0xa5844e 0x1004d82 0x1009ef8 0xff6b19 0x1262219 0x7f2345c2eb50 0x7f2344fd1a7d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x121d451]
 /usr/bin/mongod(_ZN5mongo10logContextEPKc+0x159) [0x11bcbb9]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0xe6) [0x11a1916]
 /usr/bin/mongod() [0x11a1e7c]
 /usr/bin/mongod(_ZNK5mongo7BSONObj14_assertInvalidEv+0x473) [0x7763d3]
 /usr/bin/mongod(_ZN5mongo10FetchStage4workEPm+0x4dc) [0xab06bc]
 /usr/bin/mongod(_ZN5mongo12PlanExecutor7getNextEPNS_7BSONObjEPNS_7DiskLocE+0xef) [0xd8f7df]
 /usr/bin/mongod(_ZN5mongo7Helpers11removeRangeERKNS_8KeyRangeEbbPNS0_11RemoveSaverEbb+0x60e) [0xa5844e]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus3_goEv+0x1ef2) [0x1004d82]
 /usr/bin/mongod(_ZN5mongo13MigrateStatus2goEv+0x28) [0x1009ef8]
 /usr/bin/mongod(_ZN5mongo13migrateThreadEv+0x59) [0xff6b19]
 /usr/bin/mongod() [0x1262219]
 /lib/x86_64-linux-gnu/libpthread.so.0(+0x6b50) [0x7f2345c2eb50]
 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f2344fd1a7d]
2016-02-19T01:27:41.194+0500 [migrateThread] scoped connection to unistore-2/79.110.251.108:27017,79.172.49.114:27017 not being returned to the pool

Comment by Metikov Vadim [ 18/Feb/16 ]

I upgraded all 3 nodes of the MongoDB cluster to version 2.6.11.
After that I started an initial sync of the 3rd node (we call it "russia"). Here are the logs:

2016-02-19T01:25:30.208+0500 [rsSync] replSet initial sync pending
2016-02-19T01:25:30.208+0500 [rsSync] replSet syncing to: 79.110.251.109:27017
2016-02-19T01:25:30.210+0500 [rsSync] build index on: local.replset.minvalid properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }
2016-02-19T01:25:30.210+0500 [rsSync]    added index to empty collection
2016-02-19T01:25:30.210+0500 [rsSync] replSet initial sync drop all databases
2016-02-19T01:25:30.211+0500 [rsSync] dropAllDatabasesExceptLocal 1
2016-02-19T01:25:30.211+0500 [rsSync] replSet initial sync clone all databases
2016-02-19T01:25:30.233+0500 [rsSync] replSet initial sync cloning db: test
2016-02-19T01:25:30.235+0500 [rsSync] allocating new ns file /var/lib/mongodb/test.ns, filling with zeroes...
2016-02-19T01:25:30.270+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/test.0, filling with zeroes...
2016-02-19T01:25:30.300+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/test.0, size: 64MB,  took 0.029 secs
2016-02-19T01:25:30.301+0500 [rsSync] build index on: test.fs.chunks properties: { v: 1, key: { _id: 1}, name: "_id_", ns: "test.fs.chunks" }
2016-02-19T01:25:30.301+0500 [rsSync]    added index to empty collection
2016-02-19T01:25:30.308+0500 [rsSync] replSet initial sync cloning db: grid_fs_test6
2016-02-19T01:25:30.309+0500 [rsSync] allocating new ns file /var/lib/mongodb/grid_fs_test6.ns, filling with zeroes...
2016-02-19T01:25:30.344+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.0, filling with zeroes...
2016-02-19T01:25:30.356+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.0, size: 64MB,  took 0.011 secs
2016-02-19T01:25:30.840+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.1, filling with zeroes...
2016-02-19T01:25:30.848+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.1, size: 128MB,  took 0.007 secs
2016-02-19T01:25:31.610+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.2, filling with zeroes...
2016-02-19T01:25:31.616+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.2, size: 256MB,  took 0.005 secs
2016-02-19T01:25:32.718+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.3, filling with zeroes...
2016-02-19T01:25:32.740+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.3, size: 512MB,  took 0.021 secs
2016-02-19T01:25:36.215+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.4, filling with zeroes...
2016-02-19T01:25:36.216+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.4, size: 1024MB,  took 0.001 secs
2016-02-19T01:25:41.236+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs_test6.5, filling with zeroes...
2016-02-19T01:25:41.238+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs_test6.5, size: 2047MB,  took 0.002 secs
2016-02-19T01:25:47.275+0500 [rsSync] build index on: grid_fs_test6.fs.chunks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.fs.chunks" }
2016-02-19T01:25:47.275+0500 [rsSync]    building index using bulk method
2016-02-19T01:25:47.302+0500 [rsSync] build index done.  scanned 4789 total records. 0.026 secs
2016-02-19T01:25:47.303+0500 [rsSync] build index on: grid_fs_test6.users properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.users" }
2016-02-19T01:25:47.303+0500 [rsSync]    building index using bulk method
2016-02-19T01:25:47.303+0500 [rsSync] build index done.  scanned 2 total records. 0 secs
2016-02-19T01:25:47.306+0500 [rsSync] build index on: grid_fs_test6.fs.files properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.fs.files" }
2016-02-19T01:25:47.306+0500 [rsSync]    building index using bulk method
2016-02-19T01:25:47.306+0500 [rsSync] build index done.  scanned 175 total records. 0 secs
2016-02-19T01:25:47.307+0500 [rsSync] build index on: grid_fs_test6.statistics.daily properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.statistics.daily" }
2016-02-19T01:25:47.307+0500 [rsSync]    building index using bulk method
2016-02-19T01:25:47.308+0500 [rsSync] build index done.  scanned 11 total records. 0 secs
2016-02-19T01:25:47.309+0500 [rsSync] build index on: grid_fs_test6.updating_pending_files properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.updating_pending_files" }
2016-02-19T01:25:47.309+0500 [rsSync]    added index to empty collection
2016-02-19T01:25:47.310+0500 [rsSync] build index on: grid_fs_test6.zip_collections properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs_test6.zip_collections" }
2016-02-19T01:25:47.310+0500 [rsSync]    added index to empty collection
2016-02-19T01:25:47.310+0500 [rsSync] replSet initial sync cloning db: kombu_default
2016-02-19T01:25:47.311+0500 [rsSync] allocating new ns file /var/lib/mongodb/kombu_default.ns, filling with zeroes...
2016-02-19T01:25:47.350+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.0, filling with zeroes...
2016-02-19T01:25:47.356+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.0, size: 64MB,  took 0.005 secs
2016-02-19T01:25:47.913+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.1, filling with zeroes...
2016-02-19T01:25:47.914+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.1, size: 128MB,  took 0 secs
2016-02-19T01:25:48.885+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.2, filling with zeroes...
2016-02-19T01:25:48.887+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.2, size: 256MB,  took 0 secs
2016-02-19T01:25:50.311+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.3, filling with zeroes...
2016-02-19T01:25:50.313+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.3, size: 512MB,  took 0 secs
2016-02-19T01:25:55.193+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.4, filling with zeroes...
2016-02-19T01:25:55.195+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.4, size: 1024MB,  took 0.001 secs
2016-02-19T01:26:04.821+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.5, filling with zeroes...
2016-02-19T01:26:05.386+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.5, size: 2047MB,  took 0.565 secs
2016-02-19T01:26:13.672+0500 [clientcursormon] mem (MB) res:4785 virt:119611
2016-02-19T01:26:13.672+0500 [clientcursormon]  mapped (incl journal view):118858
2016-02-19T01:26:13.672+0500 [clientcursormon]  replication threads:32
2016-02-19T01:26:18.717+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.6, filling with zeroes...
2016-02-19T01:26:18.720+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.6, size: 2047MB,  took 0.002 secs
2016-02-19T01:26:28.417+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/kombu_default.7, filling with zeroes...
2016-02-19T01:26:28.420+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/kombu_default.7, size: 2047MB,  took 0.002 secs
2016-02-19T01:26:33.218+0500 [rsSync] build index on: kombu_default.messages properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "kombu_default.messages" }
2016-02-19T01:26:33.218+0500 [rsSync]    building index using bulk method
2016-02-19T01:26:36.004+0500 [rsSync]           Index Build: 1106700/4087733    27%
2016-02-19T01:26:39.004+0500 [rsSync]           Index Build: 2265800/4087733    55%
2016-02-19T01:26:47.161+0500 [rsSync]           Index Build: 2438600/4087733    59%
2016-02-19T01:26:50.004+0500 [rsSync]           Index Build: 3603800/4087733    88%
2016-02-19T01:27:00.203+0500 [rsSync] build index done.  scanned 4087733 total records. 26.985 secs
2016-02-19T01:27:00.324+0500 [rsSync] build index on: kombu_default.messages.broadcast properties: { v:
2016-02-19T01:27:00.324+0500 [rsSync]    building index using bulk method
2016-02-19T01:27:00.325+0500 [rsSync] build index done.  scanned 165 total records. 0 secs
2016-02-19T01:27:00.327+0500 [rsSync] build index on: kombu_default.messages.routing properties: { v: 1
2016-02-19T01:27:00.327+0500 [rsSync]    building index using bulk method
2016-02-19T01:27:00.327+0500 [rsSync] build index done.  scanned 50 total records. 0 secs
2016-02-19T01:27:00.327+0500 [rsSync] replSet initial sync cloning db: grid_fs
2016-02-19T01:27:00.328+0500 [rsSync] allocating new ns file /var/lib/mongodb/grid_fs.ns, filling with
2016-02-19T01:27:01.436+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.0, filling with zeroes...
2016-02-19T01:27:01.442+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.0, size: 64MB,  took 0.005 secs
2016-02-19T01:27:01.867+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.1, filling with zeroes...
2016-02-19T01:27:01.868+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.1, size: 128MB,  took 0 secs
2016-02-19T01:27:02.764+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.2, filling with zeroes...
2016-02-19T01:27:02.765+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.2, size: 256MB,  took 0 secs
2016-02-19T01:27:04.035+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.3, filling with zeroes...
2016-02-19T01:27:04.037+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.3, size: 512MB,  took 0 secs
2016-02-19T01:27:04.604+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._0
2016-02-19T01:27:04.620+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._1
2016-02-19T01:27:07.898+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.4, filling with zeroes...
2016-02-19T01:27:07.900+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.4, size:1024MB,  took 0.001 secs
2016-02-19T01:27:13.414+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.5, filling with zeroes...
2016-02-19T01:27:13.417+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.5, size:2047MB,  took 0.002 secs
2016-02-19T01:27:23.251+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.6, filling with zeroes...
2016-02-19T01:27:23.253+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.6, size:2047MB,  took 0.002 secs
2016-02-19T01:27:33.200+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.7, filling with zeroes...
2016-02-19T01:27:33.848+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.7, size:2047MB,  took 0.646 secs
2016-02-19T01:27:38.434+0500 [journal] DR101 latency warning on journal file open 1657ms
2016-02-19T01:27:42.473+0500 [clientcursormon] mem (MB) res:15067 virt:144427
2016-02-19T01:27:42.473+0500 [clientcursormon]  mapped (incl journal view):143328
2016-02-19T01:27:42.473+0500 [clientcursormon]  replication threads:32
2016-02-19T01:27:48.916+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.8, filling with zeroes...
2016-02-19T01:27:48.918+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.8, size:2047MB,  took 0.002 secs
2016-02-19T01:28:03.118+0500 [rsSync] clone grid_fs.fs.chunks 169471
2016-02-19T01:28:06.451+0500 [rsSync] 169577 objects cloned so far from collection grid_fs.fs.chunks
2016-02-19T01:28:09.880+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.9, filling with zeroes...
2016-02-19T01:28:09.883+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.9, size:2047MB,  took 0.002 secs
2016-02-19T01:28:13.740+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._2
2016-02-19T01:28:13.756+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._3
2016-02-19T01:28:13.756+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._4
2016-02-19T01:28:26.696+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.10, filling with zeroes...
2016-02-19T01:28:26.699+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.10, size: 2047MB,  took 0.002 secs
2016-02-19T01:28:43.268+0500 [clientcursormon] mem (MB) res:21555 virt:156709
2016-02-19T01:28:43.269+0500 [clientcursormon]  mapped (incl journal view):155610
2016-02-19T01:28:43.269+0500 [clientcursormon]  replication threads:32
2016-02-19T01:28:47.202+0500 [journal] DR101 latency warning on journal file open 2051ms
2016-02-19T01:28:49.017+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.11, filling with zeroes...
2016-02-19T01:28:49.172+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.11, size: 2047MB,  took 0.154 secs
2016-02-19T01:29:03.529+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._5
2016-02-19T01:29:03.530+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._6
2016-02-19T01:29:03.530+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._7
2016-02-19T01:29:03.531+0500 [journal] old journal file will be removed: /var/lib/mongodb/journal/j._8
2016-02-19T01:29:03.883+0500 [rsSync] clone grid_fs.fs.chunks 208383
2016-02-19T01:29:07.245+0500 [rsSync] 209328 objects cloned so far from collection grid_fs.fs.chunks
2016-02-19T01:29:08.789+0500 [FileAllocator] allocating new datafile /var/lib/mongodb/grid_fs.12, filling with zeroes...
2016-02-19T01:29:08.824+0500 [FileAllocator] done allocating datafile /var/lib/mongodb/grid_fs.12, size: 2047MB,  took 0.034 secs
2016-02-19T01:29:12.140+0500 [rsSync] build index on: grid_fs.fs.chunks properties: { v: 1, key: { _id:1 }, name: "_id_", ns: "grid_fs.fs.chunks" }
2016-02-19T01:29:12.140+0500 [rsSync]    building index using bulk method
2016-02-19T01:29:13.458+0500 [rsSync] build index done.  scanned 210734 total records. 1.317 secs
2016-02-19T01:29:13.486+0500 [rsSync] replSet initial sync cloning db: admin
2016-02-19T01:29:13.515+0500 [rsSync] replSet initial sync data copy, starting syncup
2016-02-19T01:29:13.516+0500 [rsSync] build index on: grid_fs.fs.chunks properties: { v: 1, unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "grid_fs.fs.chunks" }
2016-02-19T01:29:13.516+0500 [rsSync]    building index using bulk method
2016-02-19T01:29:15.036+0500 [rsSync] build index done.  scanned 210734 total records. 1.52 secs
2016-02-19T01:29:15.148+0500 [rsSync] oplog sync 1 of 3
2016-02-19T01:29:15.532+0500 [rsBackgroundSync] replSet syncing to: 79.110.251.106:27017
2016-02-19T01:29:15.537+0500 [rsBackgroundSync] replset setting syncSourceFeedback to 79.110.251.106:27017
2016-02-19T01:29:15.541+0500 [rsSync] oplog sync 2 of 3
2016-02-19T01:29:15.541+0500 [rsSync] replSet initial sync building indexes
2016-02-19T01:29:15.541+0500 [rsSync] replSet initial sync cloning indexes for : test
2016-02-19T01:29:15.542+0500 [rsSync] replSet initial sync cloning indexes for : grid_fs_test6
2016-02-19T01:29:15.542+0500 [rsSync] replSet initial sync cloning indexes for : kombu_default
2016-02-19T01:29:15.543+0500 [rsSync] build index on: kombu_default.messages properties: { v: 1, key: {queue: 1, _id: 1 }, name: "queue_1__id_1", ns: "kombu_default.messages", background: true }
2016-02-19T01:29:15.543+0500 [rsSync]    building index using bulk method
2016-02-19T01:29:18.004+0500 [rsSync]           Index Build: 1072500/4087733    26%
2016-02-19T01:29:26.091+0500 [rsSync]           Index Build: 1103800/4087733    27%
2016-02-19T01:29:36.440+0500 [rsSync]           Index Build: 2207600/4087733    54%
2016-02-19T01:29:46.707+0500 [rsSync]           Index Build: 3311300/4087733    81%
2016-02-19T01:29:59.425+0500 [rsSync] build index done.  scanned 4087733 total records. 43.882 secs
2016-02-19T01:29:59.439+0500 [rsSync] build index on: kombu_default.messages.broadcast properties: { v:1, key: { queue: 1 }, name: "queue_1", ns: "kombu_default.messages.broadcast" }
2016-02-19T01:29:59.439+0500 [rsSync]    building index using bulk method
2016-02-19T01:29:59.440+0500 [rsSync] build index done.  scanned 165 total records. 0 secs
2016-02-19T01:29:59.440+0500 [rsSync] build index on: kombu_default.messages.routing properties: { v: 1, key: { queue: 1, exchange: 1 }, name: "queue_1_exchange_1", ns: "kombu_default.messages.routing" }
2016-02-19T01:29:59.440+0500 [rsSync]    building index using bulk method
2016-02-19T01:29:59.441+0500 [rsSync] build index done.  scanned 50 total records. 0 secs
2016-02-19T01:29:59.441+0500 [rsSync] replSet initial sync cloning indexes for : grid_fs
2016-02-19T01:29:59.466+0500 [rsSync] build index on: grid_fs.fs.files properties: { v: 1, key: { _id:1 }, name: "_id_", ns: "grid_fs.fs.files" }
2016-02-19T01:29:59.466+0500 [rsSync]    added index to empty collection
2016-02-19T01:29:59.467+0500 [rsSync] build index on: grid_fs.fs.files properties: { v: 1, key: { filename: 1, uploadDate: -1 }, name: "filename_1_uploadDate_-1", ns: "grid_fs.fs.files" }
2016-02-19T01:29:59.467+0500 [rsSync]    added index to empty collection
2016-02-19T01:29:59.589+0500 [rsSync] replSet initial sync cloning indexes for : admin
2016-02-19T01:29:59.589+0500 [rsSync] oplog sync 3 of 3
2016-02-19T01:29:59.590+0500 [rsSync] replSet initial sync finishing up
2016-02-19T01:29:59.590+0500 [rsSync] replSet set minValid=56c6299e:2
2016-02-19T01:29:59.591+0500 [rsSync] replSet RECOVERING
2016-02-19T01:29:59.591+0500 [rsSync] replSet initial sync done
2016-02-19T01:30:00.591+0500 [rsSync] replSet SECONDARY
2016-02-19T01:30:08.164+0500 [repl writer worker 1] build index on: grid_fs.statistics.daily properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs.statistics.daily" }
2016-02-19T01:30:08.164+0500 [repl writer worker 1]      added index to empty collection
2016-02-19T01:30:12.539+0500 [repl writer worker 1] build index on: grid_fs.updating_pending_files properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "grid_fs.updating_pending_files" }
2016-02-19T01:30:12.539+0500 [repl writer worker 1]      added index to empty collection
2016-02-19T01:30:31.442+0500 [clientcursormon] mem (MB) res:25259 virt:164897
2016-02-19T01:30:31.442+0500 [clientcursormon]  mapped (incl journal view):163798
2016-02-19T01:30:31.442+0500 [clientcursormon]  replication threads:32

Comment by Metikov Vadim [ 17/Feb/16 ]

Huge thanks, Ramon. The situation is that there is no healthy secondary.
I will upgrade to version 2.6.

Comment by Ramon Fernandez Marina [ 17/Feb/16 ]

metikovvadim, this error message is indicative of data corruption, most often caused by disk errors or faulty memory. You can try to resync from a healthy secondary.

While an upgrade will not address this issue, please note that 2.4 is End-Of-Life, so I'd recommend you consider upgrading to a later version.
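To illustrate why this assertion fires, here is a minimal sketch (not MongoDB's actual code; the function and variable names are mine) of the size check implied by the assertion message: every BSON document begins with a little-endian int32 holding its total byte length, and mongod 2.4 rejects any value outside the range shown in the log (0 to 16793600 bytes). A record whose bytes have been zeroed by disk or memory corruption reads back a size of 0 and a first element of EOO, which is exactly the message above.

```python
import struct

# Upper bound taken from the assertion message: 16 MB plus slack.
BSON_SIZE_LIMIT = 16793600

def bson_size_is_valid(buf: bytes) -> bool:
    """Check only the leading int32 length prefix of a BSON document."""
    if len(buf) < 5:            # minimum BSON doc: int32 size + trailing 0x00
        return False
    (size,) = struct.unpack_from("<i", buf, 0)
    return 5 <= size <= BSON_SIZE_LIMIT

# A zeroed-out record -- what corrupted data files often contain -- fails:
corrupt = b"\x00" * 16
print(bson_size_is_valid(corrupt))    # False: size 0, first element EOO

# A minimal empty document {} is 5 bytes: size=5, then the terminating 0x00.
empty_doc = struct.pack("<i", 5) + b"\x00"
print(bson_size_is_valid(empty_doc))  # True
```

This is why the error points to on-disk corruption rather than a logic bug: the server is reading back a length prefix that no writer could legitimately have produced.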

Comment by Metikov Vadim [ 17/Feb/16 ]

If I try to repair the database, all datafiles are deleted except 10 or 12, with the same error:
Assertion: 10334:BSONObj size: 144MB (...) is invalid. Size must be between 0 and 16793600(16MB)

Comment by Metikov Vadim [ 17/Feb/16 ]

Hello!
Maybe I need to reindex all the data, or upgrade the server version?

Generated at Thu Feb 08 04:00:56 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.