Core Server / SERVER-16583

Balancer Lock Information Missing in sh.status()


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Duplicate
    • Affects Version/s: 2.8.0-rc2
    • Fix Version/s: None
    • Component/s: Sharding
    • Labels:
      None
    • Operating System:
      ALL
    • Steps To Reproduce:
      • At bash prompt:

        mlaunch init --sharded 3 --replicaset
        mongo

        Note: I'm using mlaunch version 1.1.6

      • At the mongo shell:

        for (i = 1; i <= 1000; i++) {
            x = [];
            for (j = 1; j <= 1000; j++) {
                x.push({ a: i, b: j, c: 1000 * i + j, _id: 1000 * i + j });
            }
            db.foo.insert(x);
        }
         
        db.foo.ensureIndex( { a : 1, b : 1 }, { name : "first" } )
        db.foo.ensureIndex( { b : 1 }, { name : "second" } )
        sh.enableSharding("test")
        sh.shardCollection("test.foo", { b : 1 } )

      • Run sh.status() repeatedly until you catch it with the balancer on.

      Expected result: All fields in sh.status() get populated.

      Actual result: Take a look at the line that says "Balancer lock taken at undefined by undefined".

      mongos> sh.status()
      --- Sharding Status ---
        sharding version: {
      	"_id" : 1,
      	"minCompatibleVersion" : 5,
      	"currentVersion" : 6,
      	"clusterId" : ObjectId("5492159f53be077898567039")
      }
        shards:
      	{  "_id" : "shard01",  "host" : "shard01/cross-mb-air.local:27018,cross-mb-air.local:27019,cross-mb-air.local:27020" }
      	{  "_id" : "shard02",  "host" : "shard02/cross-mb-air.local:27021,cross-mb-air.local:27022,cross-mb-air.local:27023" }
      	{  "_id" : "shard03",  "host" : "shard03/cross-mb-air.local:27024,cross-mb-air.local:27025,cross-mb-air.local:27026" }
        balancer:
      	Currently enabled:  yes
      	Currently running:  yes
      		Balancer lock taken at undefined by undefined
      	Collections with active migrations:
      		test.foo started at Wed Dec 17 2014 19:05:28 GMT-0500 (EST)
      	Failed balancer rounds in last 5 attempts:  0
      	Migration Results for the last 24 hours:
      		3 : Success
      		2 : Failed with error 'chunk too big to move', from shard01 to shard03
      		1 : Failed with error 'chunk too big to move', from shard01 to shard02
        databases:
      	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
      	{  "_id" : "test",  "partitioned" : true,  "primary" : "shard01" }
      		test.foo
      			shard key: { "b" : 1 }
      			chunks:
      				shard01	4
      				shard02	2
      				shard03	1
      			{ "b" : { "$minKey" : 1 } } -->> { "b" : 1 } on : shard02 Timestamp(2, 0)
      			{ "b" : 1 } -->> { "b" : 150 } on : shard03 Timestamp(3, 0)
      			{ "b" : 150 } -->> { "b" : 300 } on : shard02 Timestamp(4, 0)
      			{ "b" : 300 } -->> { "b" : 450 } on : shard01 Timestamp(4, 2)
      			{ "b" : 450 } -->> { "b" : 600 } on : shard01 Timestamp(4, 3)
      			{ "b" : 600 } -->> { "b" : 899 } on : shard01 Timestamp(1, 2)
      			{ "b" : 899 } -->> { "b" : { "$maxKey" : 1 } } on : shard01 Timestamp(1, 3)
       
      mongos>
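      The "undefined" values suggest the helper is not finding the fields it expects on the balancer's lock document. Until this is fixed, the lock can be inspected directly in the config database; this is a workaround sketch, assuming the lock document is stored in config.locks under _id "balancer" as in this release:

```javascript
// Workaround sketch (mongos shell): read the balancer's distributed-lock
// document directly instead of relying on the sh.status() summary line.
// Assumes the lock lives in config.locks with _id "balancer".
var configDB = db.getSiblingDB("config");
configDB.locks.find({ _id: "balancer" }).forEach(printjson);
```

      The "who" and "when" fields of that document, if present, carry the holder and acquisition time that the status line should be showing.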


      Description

      In sh.status(), when the balancer lock is in place, the output reads "Balancer lock taken at undefined by undefined." I would expect either that this line be omitted when the information is unavailable, or that it be populated with the actual lock time and holder.
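      For what it's worth, "undefined by undefined" is exactly what plain JavaScript string concatenation produces when the properties being read are absent from the fetched document. A minimal illustration (balancerLine and its field names are hypothetical stand-ins, not the actual shell helper):

```javascript
// Hypothetical sketch of how such a status line could be built; "when" and
// "who" stand in for whatever fields the real helper expects on the lock doc.
function balancerLine(balLock) {
  return "Balancer lock taken at " + balLock.when + " by " + balLock.who;
}

// A document missing those fields reproduces the reported output:
console.log(balancerLine({}));  // ... at undefined by undefined
// A document carrying them yields a populated line:
console.log(balancerLine({ when: "Wed Dec 17 2014", who: "mongos:27017" }));
```

      This points at the helper reading a document (or document shape) other than the one actually holding the lock information.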


              People

              Assignee:
              Unassigned
              Reporter:
              william.cross William Cross
              Votes:
              0
              Watchers:
              2
