[SERVER-10039] Sorting on chunks not applied in differ Created: 26/Jun/13  Updated: 11/Jul/16  Resolved: 01/Jul/13

Status: Closed
Project: Core Server
Component/s: Sharding
Affects Version/s: None
Fix Version/s: 2.2.6, 2.4.6, 2.5.1

Type: Bug Priority: Critical - P2
Reporter: Matt Kangas Assignee: Greg Studer
Resolution: Done Votes: 0
Labels: buildbot
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

buildbot: Linux 64-bit DEBUG


Issue Links:
Related:
is related to SERVER-9231 (sharding/gridfs.js failing) - Closed
Operating System: ALL
Participants:

 Description   

Is this related to SERVER-9231 or a different failure?

http://buildlogs.mongodb.org/Linux%2064-bit%20DEBUG/builds/2251/test/sharding/gridfs.js

assert: [817] != [820] are not equal : undefined
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:15)
    at doassert (src/mongo/shell/assert.js:6:5)
    at Function.assert.eq (src/mongo/shell/assert.js:32:5)
    at testGridFS (/data/buildslaves/Linux_64bit_DEBUG/mongo/jstests/sharding/gridfs.js:41:12)
    at /data/buildslaves/Linux_64bit_DEBUG/mongo/jstests/sharding/gridfs.js:63:1
Wed Jun 26 20:55:36.883 [817] != [820] are not equal : undefined at src/mongo/shell/assert.js:7
failed to load: /data/buildslaves/Linux_64bit_DEBUG/mongo/jstests/sharding/gridfs.js

http://buildbot.10gen.cc/builders/Linux%2064-bit%20DEBUG/builds/2251/steps/test_12/logs/stdio

Visible in the latest Linux 64-bit DEBUG build.
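
The failing assertion at jstests/sharding/gridfs.js:41 appears to compare two chunk counts (817 vs 820). Below is a rough shell sketch of that kind of consistency check; it is illustrative only, not the actual test code (the database name and file lookup are assumptions). It compares the number of fs.chunks documents for the uploaded file against the numChunks reported by the filemd5 command run through mongos.

    // Illustrative sketch, not the actual jstests/sharding/gridfs.js code.
    var testDB = db.getSiblingDB("sharded_files_id_n");
    var fileDoc = testDB.fs.files.findOne();                           // the uploaded GridFS file
    var chunkCount = testDB.fs.chunks.count({ files_id: fileDoc._id });
    var res = testDB.runCommand({ filemd5: fileDoc._id, root: "fs" }); // returns { numChunks, md5, ok }
    assert.eq(chunkCount, res.numChunks);                              // a mismatch here looks like "[817] != [820]"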



 Comments   
Comment by auto [ 12/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 buildbot test ChunkManagerLoadBasicTest need index to support sorting chunks in test
Branch: v2.4
https://github.com/mongodb/mongo/commit/35f6d48c1f3117f2191407dcfb1121da280c59c1
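
For context on the commit above: once the differ's chunk query sorts on lastmod, the config.chunks data backing ChunkManagerLoadBasicTest needs an index that can satisfy that sort. A minimal sketch of the equivalent index on a real config server, assuming the standard config.chunks schema (the actual change is in the C++ test fixture):

    // Sketch only: an index on config.chunks that can satisfy a filter on ns
    // plus a sort on lastmod, which the differ's chunk query now requires.
    db.getSiblingDB("config").chunks.ensureIndex({ ns: 1, lastmod: 1 });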

Comment by auto [ 12/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 sorting ignored in returned query obj for differ
Branch: v2.4
https://github.com/mongodb/mongo/commit/d0413be9bb9c809c5e72a0093bcbd2428b9d503a
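
The commit above ("sorting ignored in returned query obj for differ") is the core fix: the differ builds a sort specification on lastmod for its chunk query against config.chunks, but the query object it returned did not carry that sort, so chunk diffs could be consumed out of lastmod order. Below is a minimal shell approximation of that query, assuming the filter shape shown in the "major version query" log lines in Greg's comment further down (the real code is the C++ chunk differ, so this is only a sketch):

    // Illustrative only: shell approximation of the differ's major-version
    // query against config.chunks. The fix ensures the sort on lastmod is
    // actually attached to the query the differ returns.
    var configDB = db.getSiblingDB("config");
    var lastVersion = Timestamp(9, 1);   // placeholder for the last chunk version this mongos has seen
    configDB.chunks.find({
        ns: "sharded_files_id_n.fs.chunks",
        $or: [
            { lastmod: { $gte: lastVersion } },
            { shard: "shard0001", lastmod: { $gt: lastVersion } }
        ]
    }).sort({ lastmod: 1 });             // the ordering the differ depends on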

Comment by auto [ 01/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 buildbot test ChunkManagerLoadBasicTest need index to support sorting chunks in test
Branch: v2.2
https://github.com/mongodb/mongo/commit/4bf8648b4196aff618c68ca2d814a1a13f48c3d2

Comment by auto [ 01/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 buildbot test ChunkManagerLoadBasicTest need index to support sorting chunks in test
Branch: master
https://github.com/mongodb/mongo/commit/01f1dc100623352300be761829b0e19be587de30

Comment by auto [ 01/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 sorting ignored in returned query obj for differ
Branch: v2.2
https://github.com/mongodb/mongo/commit/c83318821c4497026731511c391a6f6686d24be3

Comment by auto [ 01/Jul/13 ]

Author: Greg Studer (gregstuder, greg@10gen.com)

Message: SERVER-10039 sorting ignored in returned query obj for differ
Branch: master
https://github.com/mongodb/mongo/commit/59878c039978b28f0399ad20e349ae7ab9ddb1b8

Comment by Greg Studer [ 27/Jun/13 ]

Looks like a different failure (not SERVER-9231); can't reproduce it yet.

The key section of the log is:

Mongos loading new chunk manager

 m30999| Wed Jun 26 20:55:35.298 [conn1] loading chunk manager for collection sharded_files_id_n.fs.chunks using old chunk manager w/ version 9|1||51cb55212f05a0ad9d54475d and 271 chunks
 m30999| Wed Jun 26 20:55:35.299 [conn1] major version query from 9|1||51cb55212f05a0ad9d54475d and over 3 shards is { ns: "sharded_files_id_n.fs.chunks" }

Migration finishes

 m30002| Wed Jun 26 20:55:35.306 [conn8] Waiting for commit to finish
 m30002| Wed Jun 26 20:55:35.306 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }
 m30002| Wed Jun 26 20:55:35.306 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }
 m30002| Wed Jun 26 20:55:35.306 [migrateThread] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:35-51cb55477bb2cade670a2fa4", server: "bs-durd-linux64", clientAddr: ":27017", time: new Date(1372280135306), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 68 } }
 m30002| Wed Jun 26 20:55:35.307 [migrateThread] thread migrateThread stack usage was 16072 bytes,  which is the most so far
 m30001| Wed Jun 26 20:55:35.314 [conn9] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Wed Jun 26 20:55:35.314 [conn9] moveChunk updating self version to: 10|1||51cb55212f05a0ad9d54475d through { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 } for collection 'sharded_files_id_n.fs.chunks'
 m30001| Wed Jun 26 20:55:35.316 [conn9] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:35-51cb5547929fa0afe689ff32", server: "bs-durd-linux64", clientAddr: "127.0.0.1:41239", time: new Date(1372280135316), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, from: "shard0001", to: "shard0002" } }
 m30001| Wed Jun 26 20:55:35.316 [conn9] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Wed Jun 26 20:55:35.317 [conn9] MigrateFromStatus::done Global lock acquired
 m30001| Wed Jun 26 20:55:35.317 [conn9] forking for cleanup of chunk data
 m30001| Wed Jun 26 20:55:35.317 [conn9] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Wed Jun 26 20:55:35.317 [conn9] MigrateFromStatus::done Global lock acquired
 m30001| Wed Jun 26 20:55:35.317 [RangeDeleter] Deleter starting delete for: sharded_files_id_n.fs.chunks from { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, with opId: 1793
 m30001| Wed Jun 26 20:55:35.317 [RangeDeleter] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -1 -92
 m30001| Wed Jun 26 20:55:35.317 [RangeDeleter] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }	{ files_id: MaxKey, n: MaxKey }	
 m30001| Wed Jun 26 20:55:35.318 [RangeDeleter] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 4 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -1 -92
 m30001| Wed Jun 26 20:55:35.318 [RangeDeleter] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }	{ files_id: MaxKey, n: MaxKey }	
 m30001| Wed Jun 26 20:55:35.318 [RangeDeleter] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 5 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -1 -92
 m30001| Wed Jun 26 20:55:35.318 [RangeDeleter] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }	{ files_id: MaxKey, n: MaxKey }	
 m30001| Wed Jun 26 20:55:35.318 [RangeDeleter] rangeDeleter deleted 3 documents for sharded_files_id_n.fs.chunks from { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }
 m30001| Wed Jun 26 20:55:35.319 [conn9] distributed lock 'sharded_files_id_n.fs.chunks/bs-durd-linux64:30001:1372280101:477408165' unlocked. 
 m30001| Wed Jun 26 20:55:35.319 [conn9] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:35-51cb5547929fa0afe689ff33", server: "bs-durd-linux64", clientAddr: "127.0.0.1:41239", time: new Date(1372280135319), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, step1 of 6: 5, step2 of 6: 26, step3 of 6: 1, step4 of 6: 72, step5 of 6: 30, step6 of 6: 0 } }
 m30001| Wed Jun 26 20:55:35.319 [conn9] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('51cb552383f476e4831214fc')n_3", configdb: "localhost:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:44 r:227 w:311 reslen:37 139ms
 m30999| Wed Jun 26 20:55:35.320 [Balancer] moveChunk result: { ok: 1.0 }

The chunk manager load finishes, apparently including the newly migrated chunk

 m30999| Wed Jun 26 20:55:35.375 [conn1] loaded 271 chunks into new chunk manager for sharded_files_id_n.fs.chunks with version 10|1||51cb55212f05a0ad9d54475d
 m30999| Wed Jun 26 20:55:35.375 [conn1] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 80ms sequenceNumber: 287 version: 10|1||51cb55212f05a0ad9d54475d based on: 9|1||51cb55212f05a0ad9d54475d

A new query is then issued against the collection

 m30999| Wed Jun 26 20:55:35.383 [Balancer] loading chunk manager for collection sharded_files_id_n.fs.chunks using old chunk manager w/ version 9|1||51cb55212f05a0ad9d54475d and 271 chunks
 m30999| Wed Jun 26 20:55:35.384 [Balancer] major version query from 9|1||51cb55212f05a0ad9d54475d and over 3 shards is { ns: "sharded_files_id_n.fs.chunks", $or: [ { lastmod: { $gte: Timestamp 9000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 9000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 9000|1 } }, { shard: "shard0002", lastmod: { $gt: Timestamp 4000|1 } } ] }
 m30999| Wed Jun 26 20:55:35.385 [conn1] [pcursor] retrying (2 tries) pcursor over QSpec { ns: "sharded_files_id_n.$cmd", n2skip: 0, n2return: 1, options: 0, query: { filemd5: ObjectId('51cb552383f476e4831214fc'), partialOk: true, startAt: 820, md5state: BinData }, fields: {} } and CInfo { v_ns: "sharded_files_id_n.fs.chunks", filter: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 820 } }
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] closing cursor on shard shard0001:localhost:30001 as the connection is no longer required by [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] initializing over 1 shards required by [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.386 [conn1]  have to set shard version for conn: localhost:30001 ns:sharded_files_id_n.fs.chunks my last seq: 286  current: 287 version: 10|1||51cb55212f05a0ad9d54475d manager: 0x7f9e97880500
 m30999| Wed Jun 26 20:55:35.386 [conn1]     setShardVersion  shard0001 localhost:30001  sharded_files_id_n.fs.chunks  { setShardVersion: "sharded_files_id_n.fs.chunks", configdb: "localhost:29000", version: Timestamp 10000|1, versionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), serverID: ObjectId('51cb54d22f05a0ad9d54474e'), shard: "shard0001", shardHost: "localhost:30001" } 0x7f9e9786a620 287
 m30999| Wed Jun 26 20:55:35.386 [conn1]       setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), ok: 1.0 }
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] needed to set remote version on connection to value compatible with [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] finishing over 1 shards
 m30999| Wed Jun 26 20:55:35.386 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.387 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: { md5state: BinData, numChunks: 820, md5: "8bd272dcb9404bb16859a391ddfdad61", ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
filemd5 output: {
	"md5state" : BinData(0,"ML1lZgAAAADjQAAj23oeWhUtIW3r0FbhcFQ4TW5lbW9uaWNFdgBfWk41bW9uZ285UG9vbFN0YXRzRDBFdgAAAOCIG6IWfwAAVJMbAQAAAABAqK+yFn8AAA=="),
	"numChunks" : 820,
	"md5" : "8bd272dcb9404bb16859a391ddfdad61",
	"ok" : 1
}
 m30999| Wed Jun 26 20:55:35.393 [conn1] [pcursor] creating pcursor over QSpec { ns: "sharded_files_id_n.fs.chunks", n2skip: 0, n2return: 0, options: 0, query: { files_id: ObjectId('51cb552383f476e4831214fc') }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Jun 26 20:55:35.393 [conn1] [pcursor] initializing over 3 shards required by [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.393 [conn1] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.394 [conn1]  have to set shard version for conn: localhost:30000 ns:sharded_files_id_n.fs.chunks my last seq: 283  current: 287 version: 9|0||51cb55212f05a0ad9d54475d manager: 0x7f9e97880500
 m30999| Wed Jun 26 20:55:35.394 [conn1]     setShardVersion  shard0000 localhost:30000  sharded_files_id_n.fs.chunks  { setShardVersion: "sharded_files_id_n.fs.chunks", configdb: "localhost:29000", version: Timestamp 9000|0, versionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), serverID: ObjectId('51cb54d22f05a0ad9d54474e'), shard: "shard0000", shardHost: "localhost:30000" } 0x7f9e9786a700 287
 m30999| Wed Jun 26 20:55:35.404 [Balancer] loaded 2 chunks into new chunk manager for sharded_files_id_n.fs.chunks with version 10|1||51cb55212f05a0ad9d54475d
 m30999| Wed Jun 26 20:55:35.404 [Balancer] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 24ms sequenceNumber: 288 version: 10|1||51cb55212f05a0ad9d54475d based on: 9|1||51cb55212f05a0ad9d54475d
 m30999| Wed Jun 26 20:55:35.409 [Balancer] *** end of balancing round
 m30999| Wed Jun 26 20:55:35.416 [Balancer] distributed lock 'balancer/bs-durd-linux64:30999:1372280018:1804289383' unlocked. 
 m30999| Wed Jun 26 20:55:35.429 [conn1]       setShardVersion success: { oldVersion: Timestamp 8000|0, oldVersionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), ok: 1.0 }
 m30999| Wed Jun 26 20:55:35.429 [conn1] [pcursor] needed to set remote version on connection to value compatible with [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.429 [conn1] [pcursor] initialized query (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.429 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.430 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }

The version on the to-shard needs to be updated after the migration

 m30999| Wed Jun 26 20:55:35.430 [conn1] [pcursor] initializing on shard shard0002:localhost:30002, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.430 [conn1]  have to set shard version for conn: localhost:30002 ns:sharded_files_id_n.fs.chunks my last seq: 283  current: 287 version: 4|1||51cb55212f05a0ad9d54475d manager: 0x7f9e97880500

An SSV (setShardVersion) is sent with the old version?

 m30999| Wed Jun 26 20:55:35.430 [conn1]     setShardVersion  shard0002 localhost:30002  sharded_files_id_n.fs.chunks  { setShardVersion: "sharded_files_id_n.fs.chunks", configdb: "localhost:29000", version: Timestamp 4000|1, versionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), serverID: ObjectId('51cb54d22f05a0ad9d54474e'), shard: "shard0002", shardHost: "localhost:30002" } 0x7f9e9786ab60 287
 m30999| Wed Jun 26 20:55:35.432 [conn1]       setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51cb55212f05a0ad9d54475d'), ok: 1.0 }

... but we were using the new chunk manager?

 m30999| Wed Jun 26 20:55:35.432 [conn1] [pcursor] needed to set remote version on connection to value compatible with [sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d]
 m30999| Wed Jun 26 20:55:35.432 [conn1] [pcursor] initialized query (lazily) on shard shard0002:localhost:30002, current connection state is { state: { conn: "localhost:30002", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.432 [conn1] [pcursor] finishing over 3 shards
 m30999| Wed Jun 26 20:55:35.432 [conn1] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30002| Wed Jun 26 20:55:35.434 [conn3] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 3 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 } -1 -1
 m30002| Wed Jun 26 20:55:35.434 [conn3] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 213 }	
 m30002| Wed Jun 26 20:55:35.434 [conn3] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 4 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 } -1 -1
 m30002| Wed Jun 26 20:55:35.434 [conn3] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 213 }	
 m30002| Wed Jun 26 20:55:35.434 [conn3] bad: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 5 } { files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 } -1 -1
 m30002| Wed Jun 26 20:55:35.434 [conn3] 	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 6 }	{ files_id: ObjectId('51cb552383f476e4831214fc'), n: 213 }	
 m30999| Wed Jun 26 20:55:35.433 [conn1] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: { _id: ObjectId('51cb55239fdb20d736e8a0ab'), files_id: ObjectId('51cb552383f476e4831214fc'), n: 0, data: BinData }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Jun 26 20:55:35.433 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.436 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: { _id: ObjectId('51cb55329fdb20d736e8a209'), files_id: ObjectId('51cb552383f476e4831214fc'), n: 417, data: BinData }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Jun 26 20:55:35.436 [conn1] [pcursor] finishing on shard shard0002:localhost:30002, current connection state is { state: { conn: "localhost:30002", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Jun 26 20:55:35.437 [conn1] [pcursor] finished on shard shard0002:localhost:30002, current connection state is { state: { conn: "(done)", vinfo: "sharded_files_id_n.fs.chunks @ 10|1||51cb55212f05a0ad9d54475d", cursor: { _id: ObjectId('51cb5525929fa0afe689fc87'), files_id: ObjectId('51cb552383f476e4831214fc'), n: 6, data: BinData }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Jun 26 20:55:36.427 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Wed Jun 26 20:55:36.428 [Balancer] trying to acquire new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-durd-linux64:30999:1372280018:1804289383 )
 m30999| Wed Jun 26 20:55:36.428 [Balancer] polling for status of connection to 127.0.0.1:29000, no events
 m30999| Wed Jun 26 20:55:36.429 [Balancer] about to acquire distributed lock 'balancer/bs-durd-linux64:30999:1372280018:1804289383:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-durd-linux64:30999:1372280018:1804289383:Balancer:1681692777",
 m30999|   "process" : "bs-durd-linux64:30999:1372280018:1804289383",
 m30999|   "when" : { "$date" : "Wed Jun 26 20:55:36 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51cb55482f05a0ad9d544768" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51cb55472f05a0ad9d544767" } }
 m30999| Wed Jun 26 20:55:36.431 [Balancer] distributed lock 'balancer/bs-durd-linux64:30999:1372280018:1804289383' acquired, ts : 51cb55482f05a0ad9d544768
 m30999| Wed Jun 26 20:55:36.431 [Balancer] *** start balancing round
 m30999| Wed Jun 26 20:55:36.431 [Balancer] waitForDelete: 0
 m30999| Wed Jun 26 20:55:36.431 [Balancer] secondaryThrottle: 1
 m30999| Wed Jun 26 20:55:36.459 [Balancer] shard0001 has more chunks me:0 best: shard0000:0
 m30999| Wed Jun 26 20:55:36.459 [Balancer] shard0002 has more chunks me:1 best: shard0000:0
 m30999| Wed Jun 26 20:55:36.459 [Balancer] collection : sharded_files_id.fs.chunks
 m30999| Wed Jun 26 20:55:36.459 [Balancer] donor      : shard0002 chunks on 1
 m30999| Wed Jun 26 20:55:36.459 [Balancer] receiver   : shard0000 chunks on 0
 m30999| Wed Jun 26 20:55:36.459 [Balancer] threshold  : 2
 m30999| Wed Jun 26 20:55:36.474 [Balancer] shard0001 has more chunks me:131 best: shard0000:70
 m30999| Wed Jun 26 20:55:36.475 [Balancer] shard0002 has more chunks me:70 best: shard0000:70
 m30999| Wed Jun 26 20:55:36.475 [Balancer] collection : sharded_files_id_n.fs.chunks
 m30999| Wed Jun 26 20:55:36.475 [Balancer] donor      : shard0001 chunks on 131
 m30999| Wed Jun 26 20:55:36.475 [Balancer] receiver   : shard0000 chunks on 70
 m30999| Wed Jun 26 20:55:36.475 [Balancer] threshold  : 2
 m30999| Wed Jun 26 20:55:36.475 [Balancer]  ns: sharded_files_id_n.fs.chunks going to move { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('51cb552383f476e4831214fc')n_417", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('51cb55212f05a0ad9d54475d'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Wed Jun 26 20:55:36.475 [Balancer] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns: sharded_files_id_n.fs.chunks, shard: shard0001:localhost:30001, lastmod: 10|1||000000000000000000000000, min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Wed Jun 26 20:55:36.475 [conn9] warning: secondaryThrottle selected but no replication
 m30001| Wed Jun 26 20:55:36.476 [conn9] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('51cb552383f476e4831214fc')n_417", configdb: "localhost:29000", secondaryThrottle: true, waitForDelete: false }
 m30001| Wed Jun 26 20:55:36.478 [conn9] distributed lock 'sharded_files_id_n.fs.chunks/bs-durd-linux64:30001:1372280101:477408165' acquired, ts : 51cb5548929fa0afe689ff34
 m30001| Wed Jun 26 20:55:36.478 [conn9] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:36-51cb5548929fa0afe689ff35", server: "bs-durd-linux64", clientAddr: "127.0.0.1:41239", time: new Date(1372280136478), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, from: "shard0001", to: "shard0000" } }
 m30001| Wed Jun 26 20:55:36.551 [conn9] moveChunk request accepted at version 10|1||51cb55212f05a0ad9d54475d
 m30001| Wed Jun 26 20:55:36.552 [conn9] moveChunk number of documents: 3
 m30000| Wed Jun 26 20:55:36.553 [migrateThread] starting receiving-end of migration of chunk { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 } for collection sharded_files_id_n.fs.chunks from localhost:30001
 m30001| Wed Jun 26 20:55:36.556 [conn9] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Jun 26 20:55:36.560 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Wed Jun 26 20:55:36.560 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }
 m30001| Wed Jun 26 20:55:36.576 [conn9] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Jun 26 20:55:36.593 [conn9] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Jun 26 20:55:36.605 [conn9] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Jun 26 20:55:36.609 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }
 m30001| Wed Jun 26 20:55:36.625 [conn9] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Jun 26 20:55:36.625 [conn9] About to check if it is safe to enter critical section
 m30001| Wed Jun 26 20:55:36.626 [conn9] About to enter migrate critical section
 m30001| Wed Jun 26 20:55:36.627 [conn9] moveChunk setting version to: 11|0||51cb55212f05a0ad9d54475d
 m30000| Wed Jun 26 20:55:36.628 [conn10] Waiting for commit to finish
 m30000| Wed Jun 26 20:55:36.635 [conn10] Waiting for commit to finish
 m30000| Wed Jun 26 20:55:36.645 [conn10] Waiting for commit to finish
 m30000| Wed Jun 26 20:55:36.645 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }
 m30000| Wed Jun 26 20:55:36.645 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }
 m30000| Wed Jun 26 20:55:36.645 [migrateThread] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:36-51cb55489fdb20d736e8a21a", server: "bs-durd-linux64", clientAddr: ":27017", time: new Date(1372280136645), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 85 } }
 m30001| Wed Jun 26 20:55:36.655 [conn9] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, shardKeyPattern: { files_id: 1.0, n: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Wed Jun 26 20:55:36.655 [conn9] moveChunk updating self version to: 11|1||51cb55212f05a0ad9d54475d through { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 } -> { files_id: ObjectId('51cb552383f476e4831214fc'), n: 423 } for collection 'sharded_files_id_n.fs.chunks'
 m30001| Wed Jun 26 20:55:36.658 [conn9] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:36-51cb5548929fa0afe689ff36", server: "bs-durd-linux64", clientAddr: "127.0.0.1:41239", time: new Date(1372280136658), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, from: "shard0001", to: "shard0000" } }
 m30001| Wed Jun 26 20:55:36.658 [conn9] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Wed Jun 26 20:55:36.658 [conn9] MigrateFromStatus::done Global lock acquired
 m30001| Wed Jun 26 20:55:36.658 [conn9] forking for cleanup of chunk data
 m30001| Wed Jun 26 20:55:36.658 [conn9] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Wed Jun 26 20:55:36.658 [conn9] MigrateFromStatus::done Global lock acquired
 m30001| Wed Jun 26 20:55:36.659 [conn9] distributed lock 'sharded_files_id_n.fs.chunks/bs-durd-linux64:30001:1372280101:477408165' unlocked. 
 m30001| Wed Jun 26 20:55:36.660 [conn9] about to log metadata event: { _id: "bs-durd-linux64-2013-06-26T20:55:36-51cb5548929fa0afe689ff37", server: "bs-durd-linux64", clientAddr: "127.0.0.1:41239", time: new Date(1372280136660), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, step1 of 6: 0, step2 of 6: 75, step3 of 6: 0, step4 of 6: 73, step5 of 6: 32, step6 of 6: 0 } }
 m30001| Wed Jun 26 20:55:36.660 [conn9] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 417 }, max: { files_id: ObjectId('51cb552383f476e4831214fc'), n: 420 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('51cb552383f476e4831214fc')n_417", configdb: "localhost:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:39 r:264 w:329 reslen:37 184ms
 m30999| Wed Jun 26 20:55:36.660 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Wed Jun 26 20:55:36.665 [Balancer] loading chunk manager for collection sharded_files_id_n.fs.chunks using old chunk manager w/ version 10|1||51cb55212f05a0ad9d54475d and 271 chunks
 m30999| Wed Jun 26 20:55:36.666 [Balancer] major version query from 10|1||51cb55212f05a0ad9d54475d and over 3 shards is { ns: "sharded_files_id_n.fs.chunks" }
 m30999| Wed Jun 26 20:55:36.726 [Balancer] loaded 271 chunks into new chunk manager for sharded_files_id_n.fs.chunks with version 11|1||51cb55212f05a0ad9d54475d
 m30999| Wed Jun 26 20:55:36.726 [Balancer] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 65ms sequenceNumber: 289 version: 11|1||51cb55212f05a0ad9d54475d based on: 10|1||51cb55212f05a0ad9d54475d
 m30999| Wed Jun 26 20:55:36.730 [Balancer] *** end of balancing round
 m30999| Wed Jun 26 20:55:36.731 [Balancer] distributed lock 'balancer/bs-durd-linux64:30999:1372280018:1804289383' unlocked. 
assert: [817] != [820] are not equal : undefined
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:15)
    at doassert (src/mongo/shell/assert.js:6:5)
    at Function.assert.eq (src/mongo/shell/assert.js:32:5)
    at testGridFS (/data/buildslaves/Linux_64bit_DEBUG/mongo/jstests/sharding/gridfs.js:41:12)
    at /data/buildslaves/Linux_64bit_DEBUG/mongo/jstests/sharding/gridfs.js:63:1
Wed Jun 26 20:55:36.883 [817] != [820] are not equal : undefined at src/mongo/shell/assert.js:7
