2012-09-24 23:33:36 EDT | Mon Sep 24 23:33:36 [initandlisten] connection accepted from 127.0.0.1:49479 #85 (5 connections now open) |
| MongoDB shell version: 2.3.0-pre- |
| null |
| Resetting db path '/data/db/shard50' |
| Mon Sep 24 23:33:37 shell: started program /Users/yellow/buildslave/OS_X_105_64bit_DUR_OFF/mongo/mongod --port 30000 --dbpath /data/db/shard50 |
| m30000| Mon Sep 24 23:33:37 [initandlisten] MongoDB starting : pid=92462 port=30000 dbpath=/data/db/shard50 64-bit host=bs-mm1.local |
| m30000| Mon Sep 24 23:33:37 [initandlisten] |
| m30000| Mon Sep 24 23:33:37 [initandlisten] ** NOTE: This is a development version (2.3.0-pre-) of MongoDB. |
| m30000| Mon Sep 24 23:33:37 [initandlisten] ** Not recommended for production. |
| m30000| Mon Sep 24 23:33:37 [initandlisten] |
| m30000| Mon Sep 24 23:33:37 [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 266 processes, 10240 files. Number of processes should be at least 5120 : 0.5 times number of files. |
| m30000| Mon Sep 24 23:33:37 [initandlisten] |
| m30000| Mon Sep 24 23:33:37 [initandlisten] db version v2.3.0-pre-, pdfile version 4.5 |
| m30000| Mon Sep 24 23:33:37 [initandlisten] git version: 1930f5bc9170f2c4b061b8b416bb0a414fba5b7c |
| m30000| Mon Sep 24 23:33:37 [initandlisten] build info: Darwin bs-mm1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30000| Mon Sep 24 23:33:37 [initandlisten] options: { dbpath: "/data/db/shard50", port: 30000 } |
| m30000| Mon Sep 24 23:33:37 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/shard50/journal" |
| m30000| Mon Sep 24 23:33:37 [websvr] admin web console waiting for connections on port 31000 |
| m30000| Mon Sep 24 23:33:37 [initandlisten] waiting for connections on port 30000 |
| Resetting db path '/data/db/shard51' |
| Mon Sep 24 23:33:37 shell: started program /Users/yellow/buildslave/OS_X_105_64bit_DUR_OFF/mongo/mongod --port 30001 --dbpath /data/db/shard51 |
| m30000| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49487 #1 (1 connection now open) |
| m30001| Mon Sep 24 23:33:37 [initandlisten] MongoDB starting : pid=92463 port=30001 dbpath=/data/db/shard51 64-bit host=bs-mm1.local |
| m30001| Mon Sep 24 23:33:37 [initandlisten] |
| m30001| Mon Sep 24 23:33:37 [initandlisten] ** NOTE: This is a development version (2.3.0-pre-) of MongoDB. |
| m30001| Mon Sep 24 23:33:37 [initandlisten] ** Not recommended for production. |
| m30001| Mon Sep 24 23:33:37 [initandlisten] |
| m30001| Mon Sep 24 23:33:37 [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 266 processes, 10240 files. Number of processes should be at least 5120 : 0.5 times number of files. |
| m30001| Mon Sep 24 23:33:37 [initandlisten] |
| m30001| Mon Sep 24 23:33:37 [initandlisten] db version v2.3.0-pre-, pdfile version 4.5 |
| m30001| Mon Sep 24 23:33:37 [initandlisten] git version: 1930f5bc9170f2c4b061b8b416bb0a414fba5b7c |
| m30001| Mon Sep 24 23:33:37 [initandlisten] build info: Darwin bs-mm1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30001| Mon Sep 24 23:33:37 [initandlisten] options: { dbpath: "/data/db/shard51", port: 30001 } |
| m30001| Mon Sep 24 23:33:37 [initandlisten] Unable to check for journal files due to: boost::filesystem::basic_directory_iterator constructor: No such file or directory: "/data/db/shard51/journal" |
| m30001| Mon Sep 24 23:33:37 [websvr] admin web console waiting for connections on port 31001 |
| m30001| Mon Sep 24 23:33:37 [initandlisten] waiting for connections on port 30001 |
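Both mongod instances log the same soft-rlimit warning at startup: with the file limit at 10240, the server expects a process limit of at least half that (5120), but only 266 processes are allowed. The check reduces to simple arithmetic; a sketch of it (the function name `rlimitWarning` is my own, not MongoDB's):

```javascript
// Mirrors the startup warning above: processes should be >= 0.5 * files.
function rlimitWarning(numProcesses, numFiles) {
  var wanted = Math.floor(0.5 * numFiles); // 10240 files -> 5120 processes
  return numProcesses < wanted;            // true means the warning fires
}
```

With the values from the log, `rlimitWarning(266, 10240)` returns `true`, which is why both servers print the warning.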
| "localhost:30000" |
| m30001| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49489 #1 (1 connection now open) |
| m30000| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49490 #2 (2 connections now open) |
| ShardingTest shard5 : |
| { |
| "config" : "localhost:30000", |
| "shards" : [ |
| connection to localhost:30000, |
| connection to localhost:30001 |
| ] |
| } |
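The `ShardingTest shard5` summary above (one config server on 30000, two shards, and mongos routers on 30999/30998 started with `--chunkSize 50 -vvvvvvvvvv`) is what the mongo shell's test harness prints when a test constructs a sharded cluster. A sketch of the kind of call that produces it, assuming the 2.x-era positional `ShardingTest(name, numShards, verboseLevel, numMongos, extraOptions)` signature (treat the exact signature and option names as assumptions):

```javascript
// Requires the mongo shell's test harness; not runnable standalone.
// verboseLevel 10 corresponds to the -vvvvvvvvvv seen on the mongos command line.
var s = new ShardingTest("shard5", 2, 10, 2, { chunksize: 50 });
```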
| Mon Sep 24 23:33:37 shell: started program /Users/yellow/buildslave/OS_X_105_64bit_DUR_OFF/mongo/mongos --port 30999 --configdb localhost:30000 -vvvvvvvvvv --chunkSize 50 |
2012-09-24 23:33:40 EDT | m30999| Mon Sep 24 23:33:37 warning: running with 1 config server should be done only for testing purposes and is not recommended for production |
| m30999| Mon Sep 24 23:33:37 [mongosMain] MongoS version 2.3.0-pre- starting: pid=92464 port=30999 64-bit host=bs-mm1.local (--help for usage) |
| m30999| Mon Sep 24 23:33:37 [mongosMain] git version: 1930f5bc9170f2c4b061b8b416bb0a414fba5b7c |
| m30999| Mon Sep 24 23:33:37 [mongosMain] build info: Darwin bs-mm1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30999| Mon Sep 24 23:33:37 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", port: 30999, vvvvvvvvvv: true } |
| m30999| Mon Sep 24 23:33:37 [mongosMain] config string : localhost:30000 |
| m30999| Mon Sep 24 23:33:37 [mongosMain] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:37 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49492 #3 (3 connections now open) |
| m30999| Mon Sep 24 23:33:37 [mongosMain] connected connection! |
| m30999| Mon Sep 24 23:33:37 BackgroundJob starting: CheckConfigServers |
| m30999| Mon Sep 24 23:33:37 [CheckConfigServers] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:37 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49493 #4 (4 connections now open) |
| m30999| Mon Sep 24 23:33:37 [CheckConfigServers] connected connection! |
| m30999| Mon Sep 24 23:33:37 [mongosMain] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:37 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:37 [initandlisten] connection accepted from 127.0.0.1:49494 #5 (5 connections now open) |
| m30999| Mon Sep 24 23:33:37 [mongosMain] connected connection! |
| m30999| Mon Sep 24 23:33:37 [mongosMain] Sending command { ismaster: 1 } to localhost:30000 with $auth: {} |
| m30999| Mon Sep 24 23:33:37 [mongosMain] Sending command { ismaster: 1 } to localhost:30000 with $auth: {} |
| m30000| Mon Sep 24 23:33:37 [FileAllocator] allocating new datafile /data/db/shard50/config.ns, filling with zeroes... |
| m30000| Mon Sep 24 23:33:37 [FileAllocator] creating directory /data/db/shard50/_tmp |
| m30000| Mon Sep 24 23:33:38 [FileAllocator] done allocating datafile /data/db/shard50/config.ns, size: 16MB, took 0.427 secs |
| m30000| Mon Sep 24 23:33:38 [FileAllocator] allocating new datafile /data/db/shard50/config.0, filling with zeroes... |
| m30000| Mon Sep 24 23:33:40 [FileAllocator] done allocating datafile /data/db/shard50/config.0, size: 64MB, took 1.7 secs |
| m30000| Mon Sep 24 23:33:40 [conn5] build index config.version { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn5] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn5] insert config.version keyUpdates:0 locks(micros) w:2144321 2144ms |
| m30000| Mon Sep 24 23:33:40 [FileAllocator] allocating new datafile /data/db/shard50/config.1, filling with zeroes... |
| m30999| Mon Sep 24 23:33:40 [websvr] fd limit hard:10240 soft:10240 max conn: 8192 |
| m30999| Mon Sep 24 23:33:40 [websvr] admin web console waiting for connections on port 31999 |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.settings { _id: 1 } |
| m30999| Mon Sep 24 23:33:40 BackgroundJob starting: Balancer |
| m30999| Mon Sep 24 23:33:40 [Balancer] about to contact config servers and shards |
| m30999| Mon Sep 24 23:33:40 BackgroundJob starting: cursorTimeout |
| m30999| Mon Sep 24 23:33:40 BackgroundJob starting: PeriodicTask::Runner |
| m30999| Mon Sep 24 23:33:40 [mongosMain] fd limit hard:10240 soft:10240 max conn: 8192 |
| Mon Sep 24 23:33:40 shell: started program /Users/yellow/buildslave/OS_X_105_64bit_DUR_OFF/mongo/mongos --port 30998 --configdb localhost:30000 -vvvvvvvvvv --chunkSize 50 |
| m30998| Mon Sep 24 23:33:40 warning: running with 1 config server should be done only for testing purposes and is not recommended for production |
| m30998| Mon Sep 24 23:33:40 [mongosMain] MongoS version 2.3.0-pre- starting: pid=92465 port=30998 64-bit host=bs-mm1.local (--help for usage) |
| m30998| Mon Sep 24 23:33:40 [mongosMain] git version: 1930f5bc9170f2c4b061b8b416bb0a414fba5b7c |
| m30998| Mon Sep 24 23:33:40 [mongosMain] build info: Darwin bs-mm1.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_49 |
| m30998| Mon Sep 24 23:33:40 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", port: 30998, vvvvvvvvvv: true } |
| m30998| Mon Sep 24 23:33:40 [mongosMain] config string : localhost:30000 |
| m30998| Mon Sep 24 23:33:40 [mongosMain] creating new connection to:localhost:30000 |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:40 [initandlisten] connection accepted from 127.0.0.1:49508 #6 (6 connections now open) |
| m30998| Mon Sep 24 23:33:40 [mongosMain] connected connection! |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0.137 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] insert config.settings keyUpdates:0 locks(micros) w:137950 137ms |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.chunks { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] info: creating collection config.chunks on add index |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.chunks { ns: 1, min: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.chunks { ns: 1, lastmod: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.shards { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] info: creating collection config.shards on add index |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.shards { host: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn5] build index config.mongos { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn5] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [initandlisten] connection accepted from 127.0.0.1:49509 #7 (7 connections now open) |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: CheckConfigServers |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.lockpings { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.lockpings { ping: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 1 total records. 0 secs |
| m30998| Mon Sep 24 23:33:40 [mongosMain] MaxChunkSize: 50 |
| m30998| Mon Sep 24 23:33:40 [websvr] fd limit hard:10240 soft:10240 max conn: 8192 |
| m30998| Mon Sep 24 23:33:40 [websvr] admin web console waiting for connections on port 31998 |
| m30000| Mon Sep 24 23:33:40 [conn7] build index config.locks { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn7] build index done. scanned 0 total records. 0 secs |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: Balancer |
| m30998| Mon Sep 24 23:33:40 [Balancer] about to contact config servers and shards |
| m30998| Mon Sep 24 23:33:40 [Balancer] creating new connection to:localhost:30000 |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:40 [initandlisten] connection accepted from 127.0.0.1:49510 #8 (8 connections now open) |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: cursorTimeout |
| m30998| Mon Sep 24 23:33:40 [Balancer] connected connection! |
| m30998| Mon Sep 24 23:33:40 [Balancer] config servers and shards contacted successfully |
| m30998| Mon Sep 24 23:33:40 [Balancer] balancer id: bs-mm1.local:30998 started at Sep 24 23:33:40 |
| m30998| Mon Sep 24 23:33:40 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30998| Mon Sep 24 23:33:40 [Balancer] creating new connection to:localhost:30000 |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:40 [initandlisten] connection accepted from 127.0.0.1:49511 #9 (9 connections now open) |
| m30998| Mon Sep 24 23:33:40 BackgroundJob starting: PeriodicTask::Runner |
| m30998| Mon Sep 24 23:33:40 [Balancer] connected connection! |
| m30999| Mon Sep 24 23:33:40 [mongosMain] waiting for connections on port 30999 |
| m30999| Mon Sep 24 23:33:40 [mongosMain] connection accepted from 127.0.0.1:49505 #1 (1 connection now open) |
| m30999| Mon Sep 24 23:33:40 [Balancer] config servers and shards contacted successfully |
| m30999| Mon Sep 24 23:33:40 [Balancer] balancer id: bs-mm1.local:30999 started at Sep 24 23:33:40 |
| m30999| Mon Sep 24 23:33:40 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Mon Sep 24 23:33:40 [Balancer] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:40 BackgroundJob starting: ConnectBG |
| m30999| Mon Sep 24 23:33:40 [Balancer] connected connection! |
| m30999| Mon Sep 24 23:33:40 [Balancer] Refreshing MaxChunkSize: 50 |
| m30999| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30999| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30999| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30999| Mon Sep 24 23:33:40 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds. |
| m30999| Mon Sep 24 23:33:40 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-mm1.local:30999:1348544020:16807 (sleeping for 30000ms) |
| m30999| Mon Sep 24 23:33:40 [LockPinger] distributed lock pinger 'localhost:30000/bs-mm1.local:30999:1348544020:16807' about to ping. |
| m30999| Mon Sep 24 23:33:40 [LockPinger] cluster localhost:30000 pinged successfully at Mon Sep 24 23:33:40 2012 by distributed lock pinger 'localhost:30000/bs-mm1.local:30999:1348544020:16807', sleeping for 30000ms |
| m30999| Mon Sep 24 23:33:40 [Balancer] inserting initial doc in config.locks for lock balancer |
| m30999| Mon Sep 24 23:33:40 [Balancer] about to acquire distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807: |
| m30999| { "state" : 1, |
| m30999| "who" : "bs-mm1.local:30999:1348544020:16807:Balancer:282475249", |
| m30999| "process" : "bs-mm1.local:30999:1348544020:16807", |
| m30999| "when" : { "$date" : "Mon Sep 24 23:33:40 2012" }, |
| m30999| "why" : "doing balance round", |
| m30999| "ts" : { "$oid" : "5061261484ef5d5c2adfd35f" } } |
| m30999| { "_id" : "balancer", |
| m30999| "state" : 0 } |
| m30999| Mon Sep 24 23:33:40 [Balancer] distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807' acquired, ts : 5061261484ef5d5c2adfd35f |
| m30999| Mon Sep 24 23:33:40 [Balancer] *** start balancing round |
| m30999| Mon Sep 24 23:33:40 [Balancer] no collections to balance |
| m30999| Mon Sep 24 23:33:40 [Balancer] no need to move any chunk |
| m30999| Mon Sep 24 23:33:40 [Balancer] *** end of balancing round |
| m30999| Mon Sep 24 23:33:40 [Balancer] distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807' unlocked. |
| m30998| Mon Sep 24 23:33:40 [mongosMain] fd limit hard:10240 soft:10240 max conn: 8192 |
| m30998| Mon Sep 24 23:33:40 [mongosMain] waiting for connections on port 30998 |
| m30998| Mon Sep 24 23:33:40 [Balancer] Refreshing MaxChunkSize: 50 |
| m30998| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: -3 |
| m30998| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: 0 |
| m30998| Mon Sep 24 23:33:40 [Balancer] skew from remote server localhost:30000 found: -1 |
| m30998| Mon Sep 24 23:33:40 [Balancer] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds. |
| m30998| Mon Sep 24 23:33:40 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-mm1.local:30998:1348544020:16807 (sleeping for 30000ms) |
| m30998| Mon Sep 24 23:33:40 [LockPinger] distributed lock pinger 'localhost:30000/bs-mm1.local:30998:1348544020:16807' about to ping. |
| m30998| Mon Sep 24 23:33:40 [LockPinger] cluster localhost:30000 pinged successfully at Mon Sep 24 23:33:40 2012 by distributed lock pinger 'localhost:30000/bs-mm1.local:30998:1348544020:16807', sleeping for 30000ms |
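Before starting balancing, each mongos samples the clock of every config server (the `skew from remote server ... found: -3/0/-1` lines) and requires the total skew across config servers to stay within a 30000 ms bound. With a single config server the spread collapses to 0 ms, which is exactly what both mongos processes report. An illustrative model (function names are my own, and how the per-server samples are combined into one skew value is an assumption):

```javascript
// Total skew modeled as the spread (max - min) of per-config-server skews;
// with one config server this is always 0, matching the log.
function totalClockSkewMs(perServerSkewMs) {
  var max = Math.max.apply(null, perServerSkewMs);
  var min = Math.min.apply(null, perServerSkewMs);
  return max - min;
}

// The 30000 ms bound seen in the "is in 30000ms bounds" log lines.
function skewInBounds(perServerSkewMs, boundMs) {
  return totalClockSkewMs(perServerSkewMs) <= boundMs;
}
```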
| m30998| Mon Sep 24 23:33:40 [Balancer] about to acquire distributed lock 'balancer/bs-mm1.local:30998:1348544020:16807: |
| m30998| { "state" : 1, |
| m30998| "who" : "bs-mm1.local:30998:1348544020:16807:Balancer:282475249", |
| m30998| "process" : "bs-mm1.local:30998:1348544020:16807", |
| m30998| "when" : { "$date" : "Mon Sep 24 23:33:40 2012" }, |
| m30998| "why" : "doing balance round", |
| m30998| "ts" : { "$oid" : "50612614c475b0df672ac7a9" } } |
| m30998| { "_id" : "balancer", |
| m30998| "state" : 0, |
| m30998| "ts" : { "$oid" : "5061261484ef5d5c2adfd35f" } } |
| m30998| Mon Sep 24 23:33:40 [Balancer] distributed lock 'balancer/bs-mm1.local:30998:1348544020:16807' acquired, ts : 50612614c475b0df672ac7a9 |
| m30998| Mon Sep 24 23:33:40 [Balancer] *** start balancing round |
| m30998| Mon Sep 24 23:33:40 [Balancer] no collections to balance |
| m30998| Mon Sep 24 23:33:40 [Balancer] no need to move any chunk |
| m30998| Mon Sep 24 23:33:40 [Balancer] *** end of balancing round |
| m30998| Mon Sep 24 23:33:40 [Balancer] distributed lock 'balancer/bs-mm1.local:30998:1348544020:16807' unlocked. |
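The two balancer rounds above show the distributed lock protocol in `config.locks`: m30999 inserts the initial `balancer` lock document, flips its `state` from 0 to 1 with a fresh `ts` ObjectId while it balances, then unlocks; m30998 then acquires the same document with its own `ts`. A minimal in-memory model of that state machine (illustrative only; the real lock is a document updated with compare-and-swap semantics on the config server):

```javascript
// Lock document shape taken from the log: { _id, state, who, ts, ... }.
function makeLock() {
  return { _id: "balancer", state: 0, who: null, ts: null };
}

// Acquisition succeeds only when nobody holds the lock (state 0 -> 1).
function tryAcquire(lock, who, ts) {
  if (lock.state !== 0) return false;
  lock.state = 1;
  lock.who = who;
  lock.ts = ts;
  return true;
}

// Unlock returns the document to state 0; the last ts remains visible,
// as in the second acquisition attempt logged above.
function release(lock) {
  lock.state = 0;
}
```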
| m30998| Mon Sep 24 23:33:40 [mongosMain] connection accepted from 127.0.0.1:49513 #1 (1 connection now open) |
| ShardingTest undefined going to add shard : localhost:30000 |
| m30999| Mon Sep 24 23:33:40 [conn1] couldn't find database [admin] in config db |
| m30000| Mon Sep 24 23:33:40 [conn3] build index config.databases { _id: 1 } |
| m30000| Mon Sep 24 23:33:40 [conn3] build index done. scanned 0 total records. 0.007 secs |
| m30999| Mon Sep 24 23:33:40 [conn1] put [admin] on: config:localhost:30000 |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process begin ns: admin.$cmd msg id: 0 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] single query: admin.$cmd { addshard: "localhost:30000" } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process end ns: admin.$cmd msg id: 0 op: 2004 attempt: 0 1ms |
| { "shardAdded" : "shard0000", "ok" : 1 } |
| ShardingTest undefined going to add shard : localhost:30001 |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process begin ns: admin.$cmd msg id: 1 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] single query: admin.$cmd { addshard: "localhost:30001" } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] creating new connection to:localhost:30001 |
| m30999| Mon Sep 24 23:33:40 BackgroundJob starting: ConnectBG |
| m30001| Mon Sep 24 23:33:40 [initandlisten] connection accepted from 127.0.0.1:49514 #2 (2 connections now open) |
| m30999| Mon Sep 24 23:33:40 [conn1] connected connection! |
| m30999| Mon Sep 24 23:33:40 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process end ns: admin.$cmd msg id: 1 op: 2004 attempt: 0 1ms |
| { "shardAdded" : "shard0001", "ok" : 1 } |
2012-09-24 23:33:46 EDT | m30999| Mon Sep 24 23:33:40 [conn1] Request::process begin ns: admin.$cmd msg id: 2 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] single query: admin.$cmd { enablesharding: "test" } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] couldn't find database [test] in config db |
| m30999| Mon Sep 24 23:33:40 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 0 writeLock: 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] put [test] on: shard0001:localhost:30001 |
| m30999| Mon Sep 24 23:33:40 [conn1] enabling sharding on: test |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process end ns: admin.$cmd msg id: 2 op: 2004 attempt: 0 1ms |
| m30999| Mon Sep 24 23:33:40 [conn1] Request::process begin ns: admin.$cmd msg id: 3 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] single query: admin.$cmd { shardcollection: "test.foo", key: { num: 1.0 } } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:40 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } } |
| m30001| Mon Sep 24 23:33:40 [FileAllocator] allocating new datafile /data/db/shard51/test.ns, filling with zeroes... |
| m30001| Mon Sep 24 23:33:40 [FileAllocator] creating directory /data/db/shard51/_tmp |
| m30999| Mon Sep 24 23:33:40 [conn1] enable sharding on: test.foo with shard key: { num: 1.0 } |
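The shell commands driving this part of the test can be read directly off the mongos request log: two `addshard` commands, then `enablesharding` for the `test` database, then `shardcollection` on `test.foo` with shard key `{ num: 1 }`. Issued by hand against the mongos they would look like the following (requires the running cluster; the return values in the comments are quoted from the log above, not re-executed):

```javascript
// From a mongo shell connected to the mongos on port 30999.
var admin = db.getSiblingDB("admin");
admin.runCommand({ addshard: "localhost:30000" });  // log: { "shardAdded" : "shard0000", "ok" : 1 }
admin.runCommand({ addshard: "localhost:30001" });  // log: { "shardAdded" : "shard0001", "ok" : 1 }
admin.runCommand({ enablesharding: "test" });
admin.runCommand({ shardcollection: "test.foo", key: { num: 1.0 } });
```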
| m30001| Mon Sep 24 23:33:41 [FileAllocator] done allocating datafile /data/db/shard51/test.ns, size: 16MB, took 1.226 secs |
| m30001| Mon Sep 24 23:33:41 [FileAllocator] allocating new datafile /data/db/shard51/test.0, filling with zeroes... |
| m30001| Mon Sep 24 23:33:46 [FileAllocator] done allocating datafile /data/db/shard51/test.0, size: 64MB, took 4.46 secs |
| m30001| Mon Sep 24 23:33:46 [FileAllocator] allocating new datafile /data/db/shard51/test.1, filling with zeroes... |
| m30001| Mon Sep 24 23:33:46 [conn2] build index test.foo { _id: 1 } |
| m30999| Mon Sep 24 23:33:46 [Balancer] Refreshing MaxChunkSize: 50 |
| m30999| Mon Sep 24 23:33:46 [Balancer] creating new connection to:localhost:30001 |
| m30999| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49515 #3 (3 connections now open) |
| m30999| Mon Sep 24 23:33:46 [Balancer] connected connection! |
| m30999| Mon Sep 24 23:33:46 [Balancer] about to acquire distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807: |
| m30999| { "state" : 1, |
| m30999| "who" : "bs-mm1.local:30999:1348544020:16807:Balancer:282475249", |
| m30999| "process" : "bs-mm1.local:30999:1348544020:16807", |
| m30999| "when" : { "$date" : "Mon Sep 24 23:33:46 2012" }, |
| m30999| "why" : "doing balance round", |
| m30999| "ts" : { "$oid" : "5061261a84ef5d5c2adfd360" } } |
| m30999| { "_id" : "balancer", |
| m30999| "state" : 0, |
| m30999| "ts" : { "$oid" : "50612614c475b0df672ac7a9" } } |
| m30999| Mon Sep 24 23:33:46 [Balancer] distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807' acquired, ts : 5061261a84ef5d5c2adfd360 |
| m30999| Mon Sep 24 23:33:46 [Balancer] *** start balancing round |
| m30999| Mon Sep 24 23:33:46 [Balancer] no collections to balance |
| m30999| Mon Sep 24 23:33:46 [Balancer] no need to move any chunk |
| m30999| Mon Sep 24 23:33:46 [Balancer] *** end of balancing round |
| m30999| Mon Sep 24 23:33:46 [Balancer] distributed lock 'balancer/bs-mm1.local:30999:1348544020:16807' unlocked. |
| m30001| Mon Sep 24 23:33:46 [conn2] build index done. scanned 0 total records. 0.237 secs |
| m30001| Mon Sep 24 23:33:46 [conn2] info: creating collection test.foo on add index |
| m30001| Mon Sep 24 23:33:46 [conn2] build index test.foo { num: 1.0 } |
| m30001| Mon Sep 24 23:33:46 [conn2] build index done. scanned 0 total records. 0 secs |
| m30001| Mon Sep 24 23:33:46 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) w:5941076 5941ms |
| m30999| Mon Sep 24 23:33:46 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] found 1 new chunks for collection test.foo (tracking 1), new version is 0x100b052a0 |
| m30999| Mon Sep 24 23:33:46 [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5061261a84ef5d5c2adfd361 based on: (empty) |
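Chunk versions in these messages print as `major|minor||epoch`, e.g. `1|0||5061261a84ef5d5c2adfd361`: the major/minor pair orders splits and migrations, and the epoch ObjectId identifies the collection incarnation. A small parser for the printed form (the function name is hypothetical, not a MongoDB API):

```javascript
// Parses strings like "1|0||5061261a84ef5d5c2adfd361" from the mongos log.
function parseChunkVersion(s) {
  var halves = s.split("||");       // ["1|0", "<epoch>"]
  var nums = halves[0].split("|");  // ["1", "0"]
  return {
    major: parseInt(nums[0], 10),
    minor: parseInt(nums[1], 10),
    epoch: halves[1]
  };
}
```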
| m30000| Mon Sep 24 23:33:46 [conn3] build index config.collections { _id: 1 } |
| m30000| Mon Sep 24 23:33:46 [conn3] build index done. scanned 0 total records. 0 secs |
| m30999| Mon Sep 24 23:33:46 [conn1] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49516 #10 (10 connections now open) |
| m30999| Mon Sep 24 23:33:46 [conn1] connected connection! |
| m30999| Mon Sep 24 23:33:46 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5061261484ef5d5c2adfd35e |
| m30999| Mon Sep 24 23:33:46 BackgroundJob starting: WriteBackListener-localhost:30000 |
| m30999| Mon Sep 24 23:33:46 [conn1] initializing shard connection to localhost:30000 |
| m30999| Mon Sep 24 23:33:46 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true } |
| m30999| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true } to localhost:30000 with $auth: {} |
| m30999| Mon Sep 24 23:33:46 [conn1] initial sharding result : { initialized: true, ok: 1.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] resetting shard version of test.foo on localhost:30000, version is zero |
| m30999| Mon Sep 24 23:33:46 [conn1] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x100b05160 |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), shard: "shard0000", shardHost: "localhost:30000" } 0x100b05bf0 |
| m30999| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), shard: "shard0000", shardHost: "localhost:30000" } to localhost:30000 with $auth: {} |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] creating new connection to:localhost:30001 |
| m30999| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49517 #4 (4 connections now open) |
| m30999| Mon Sep 24 23:33:46 [conn1] connected connection! |
| m30999| Mon Sep 24 23:33:46 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5061261484ef5d5c2adfd35e |
| m30999| Mon Sep 24 23:33:46 BackgroundJob starting: WriteBackListener-localhost:30001 |
| m30999| Mon Sep 24 23:33:46 [conn1] initializing shard connection to localhost:30001 |
| m30999| Mon Sep 24 23:33:46 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true } |
| m30999| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true } to localhost:30001 with $auth: {} |
| m30999| Mon Sep 24 23:33:46 [conn1] initial sharding result : { initialized: true, ok: 1.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||5061261a84ef5d5c2adfd361 manager: 0x100b05160 |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b07720 |
| m30999| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), shard: "shard0001", shardHost: "localhost:30001" } to localhost:30001 with $auth: {} |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion failed! |
| m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||5061261a84ef5d5c2adfd361 manager: 0x100b05160 |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x100b07720 |
| m30999| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('5061261484ef5d5c2adfd35e'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } to localhost:30001 with $auth: {} |
| m30001| Mon Sep 24 23:33:46 [conn4] no current chunk manager found for this shard, will initialize |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49518 #11 (11 connections now open) |
| m30999| Mon Sep 24 23:33:46 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: admin.$cmd msg id: 3 op: 2004 attempt: 0 5953ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 4 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 1161125 splitThreshold: 921 |
| m30999| Mon Sep 24 23:33:46 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 4 op: 2002 attempt: 0 1ms |
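On each insert, mongos tracks an estimate of bytes written to the target chunk and, once past a threshold, considers asking the shard for split points (`about to initiate autosplit ... dataWritten: 1161125 splitThreshold: 921`); here no split happened (`chunk not full enough to trigger auto-split no split entry`). The trigger side reduces to a comparison; a sketch under that reading of the log (`shouldRequestSplit` is my name for it, and the surrounding bookkeeping is simplified away):

```javascript
// Decide whether tracked writes to a chunk warrant an autosplit check;
// the sample values in the test come from the log line above.
function shouldRequestSplit(dataWrittenBytes, splitThresholdBytes) {
  return dataWrittenBytes > splitThresholdBytes;
}
```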
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 5 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 5 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 6 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 6 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 7 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 7 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 8 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 8 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 9 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 9 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 10 op: 2002 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] write: test.foo |
| m30999| Mon Sep 24 23:33:46 [conn1] inserting 1 documents to shard shard0001:localhost:30001 at version 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] about to initiate autosplit: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } dataWritten: 210 splitThreshold: 921 |
| m30999| Mon Sep 24 23:33:46 [conn1] chunk not full enough to trigger auto-split no split entry |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 10 op: 2002 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 11 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] shard query: test.foo {} |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||5061261a84ef5d5c2adfd361] |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing over 1 shards |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: { _id: ObjectId('5061261a100dfb70ea7a678f'), num: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] cursor type: ParallelSort |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a678f'), num: 1.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6790'), num: 2.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6791'), num: 3.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6792'), num: 4.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6793'), num: 5.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6794'), num: 6.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6795'), num: 7.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] hasMore: 0 sendMore: 1 cursorMore: 0 ntoreturn: 0 num: 7 wouldSendMoreIfHad: 1 id:1467286779789100987 totalSent: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 11 op: 2004 attempt: 0 0ms |
| m30998| Mon Sep 24 23:33:46 [conn1] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "shard0001" } |
| m30998| Mon Sep 24 23:33:46 [conn1] found 1 new chunks for collection test.foo (tracking 1), new version is 0x100912820 |
| m30998| Mon Sep 24 23:33:46 [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||5061261a84ef5d5c2adfd361 |
| m30998| Mon Sep 24 23:33:46 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5061261a84ef5d5c2adfd361 based on: (empty) |
| m30998| Mon Sep 24 23:33:46 [conn1] found 0 dropped collections and 1 sharded collections for database test |
| m30998| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: test.foo msg id: 12 op: 2004 attempt: 0 |
| m30998| Mon Sep 24 23:33:46 [conn1] shard query: test.foo {} |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||5061261a84ef5d5c2adfd361] |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30998| Mon Sep 24 23:33:46 [conn1] creating new connection to:localhost:30000 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49519 #12 (12 connections now open) |
| m30998| Mon Sep 24 23:33:46 [conn1] connected connection! |
| m30998| Mon Sep 24 23:33:46 [conn1] creating WriteBackListener for: localhost:30000 serverID: 50612614c475b0df672ac7a8 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: WriteBackListener-localhost:30000 |
| m30998| Mon Sep 24 23:33:46 [conn1] initializing shard connection to localhost:30000 |
| m30998| Mon Sep 24 23:33:46 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('50612614c475b0df672ac7a8'), authoritative: true } |
| m30998| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('50612614c475b0df672ac7a8'), authoritative: true } to localhost:30000 with $auth: {} |
| m30998| Mon Sep 24 23:33:46 [conn1] initial sharding result : { initialized: true, ok: 1.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] resetting shard version of test.foo on localhost:30000, version is zero |
| m30998| Mon Sep 24 23:33:46 [conn1] have to set shard version for conn: localhost:30000 ns:test.foo my last seq: 0 current: 2 version: 0|0||000000000000000000000000 manager: 0x1009126e0 |
| m30998| Mon Sep 24 23:33:46 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50612614c475b0df672ac7a8'), shard: "shard0000", shardHost: "localhost:30000" } 0x100912e50 |
| m30998| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50612614c475b0df672ac7a8'), shard: "shard0000", shardHost: "localhost:30000" } to localhost:30000 with $auth: {} |
| m30998| Mon Sep 24 23:33:46 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] creating new connection to:localhost:30001 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30998| Mon Sep 24 23:33:46 [Balancer] creating new connection to:localhost:30000 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49521 #5 (5 connections now open) |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49522 #13 (13 connections now open) |
| m30998| Mon Sep 24 23:33:46 [conn1] connected connection! |
| m30998| Mon Sep 24 23:33:46 [conn1] creating WriteBackListener for: localhost:30001 serverID: 50612614c475b0df672ac7a8 |
| m30998| Mon Sep 24 23:33:46 [Balancer] connected connection! |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: WriteBackListener-localhost:30001 |
| m30998| Mon Sep 24 23:33:46 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30998| Mon Sep 24 23:33:46 [conn1] initializing shard connection to localhost:30001 |
| m30998| Mon Sep 24 23:33:46 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('50612614c475b0df672ac7a8'), authoritative: true } |
| m30998| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('50612614c475b0df672ac7a8'), authoritative: true } to localhost:30001 with $auth: {} |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49523 #6 (6 connections now open) |
| m30998| Mon Sep 24 23:33:46 [Balancer] Refreshing MaxChunkSize: 50 |
| m30998| Mon Sep 24 23:33:46 [conn1] initial sharding result : { initialized: true, ok: 1.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||5061261a84ef5d5c2adfd361 manager: 0x1009126e0 |
| m30998| Mon Sep 24 23:33:46 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('50612614c475b0df672ac7a8'), shard: "shard0001", shardHost: "localhost:30001" } 0x100b01950 |
| m30998| Mon Sep 24 23:33:46 [conn1] Sending command { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('5061261a84ef5d5c2adfd361'), serverID: ObjectId('50612614c475b0df672ac7a8'), shard: "shard0001", shardHost: "localhost:30001" } to localhost:30001 with $auth: {} |
| m30998| Mon Sep 24 23:33:46 [WriteBackListener-localhost:30001] connected connection! |
| m30998| Mon Sep 24 23:33:46 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing over 1 shards |
| m30998| Mon Sep 24 23:33:46 [Balancer] creating new connection to:localhost:30001 |
| m30998| Mon Sep 24 23:33:46 BackgroundJob starting: ConnectBG |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30998| Mon Sep 24 23:33:46 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||5061261a84ef5d5c2adfd361", cursor: { _id: ObjectId('5061261a100dfb70ea7a678f'), num: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| m30998| Mon Sep 24 23:33:46 [conn1] cursor type: ParallelSort |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a678f'), num: 1.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6790'), num: 2.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6791'), num: 3.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6792'), num: 4.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6793'), num: 5.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6794'), num: 6.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] Matcher::matches() { _id: ObjectId('5061261a100dfb70ea7a6795'), num: 7.0 } |
| m30998| Mon Sep 24 23:33:46 [conn1] hasMore: 0 sendMore: 1 cursorMore: 0 ntoreturn: 0 num: 7 wouldSendMoreIfHad: 1 id:812990919161451999 totalSent: 0 |
| m30998| Mon Sep 24 23:33:46 [conn1] Request::process end ns: test.foo msg id: 12 op: 2004 attempt: 0 4ms |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49524 #7 (7 connections now open) |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: admin.$cmd msg id: 13 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] single query: admin.$cmd { split: "test.foo", middle: { num: 4.0 } } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] splitting: test.foo shard: ns:test.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { num: MinKey } max: { num: MaxKey } |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49525 #14 (14 connections now open) |
| m30001| Mon Sep 24 23:33:46 [conn3] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 4.0 } ], shardId: "test.foo-num_MinKey", configdb: "localhost:30000" } |
| m30001| Mon Sep 24 23:33:46 [conn3] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Mon Sep 24 23:33:46 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-mm1.local:30001:1348544026:1391051266 (sleeping for 30000ms) |
| m30001| Mon Sep 24 23:33:46 [conn3] distributed lock 'test.foo/bs-mm1.local:30001:1348544026:1391051266' acquired, ts : 5061261a2e1bffcddee18cc8 |
| m30998| Mon Sep 24 23:33:46 [Balancer] connected connection! |
| m30001| Mon Sep 24 23:33:46 [conn3] splitChunk accepted at version 1|0||5061261a84ef5d5c2adfd361 |
| m30998| Mon Sep 24 23:33:46 [Balancer] about to acquire distributed lock 'balancer/bs-mm1.local:30998:1348544020:16807: |
| m30998| { "state" : 1, |
| m30998| "who" : "bs-mm1.local:30998:1348544020:16807:Balancer:282475249", |
| m30998| "process" : "bs-mm1.local:30998:1348544020:16807", |
| m30998| "when" : { "$date" : "Mon Sep 24 23:33:46 2012" }, |
| m30998| "why" : "doing balance round", |
| m30998| "ts" : { "$oid" : "5061261ac475b0df672ac7aa" } } |
| m30998| { "_id" : "balancer", |
| m30998| "state" : 0, |
| m30998| "ts" : { "$oid" : "5061261a84ef5d5c2adfd360" } } |
| m30001| Mon Sep 24 23:33:46 [conn3] about to log metadata event: { _id: "bs-mm1.local-2012-09-25T03:33:46-0", server: "bs-mm1.local", clientAddr: "127.0.0.1:49515", time: new Date(1348544026279), what: "split", ns: "test.foo", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 4.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5061261a84ef5d5c2adfd361') }, right: { min: { num: 4.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5061261a84ef5d5c2adfd361') } } } |
| m30000| Mon Sep 24 23:33:46 [conn11] build index config.changelog { _id: 1 } |
| m30000| Mon Sep 24 23:33:46 [conn11] build index done. scanned 0 total records. 0 secs |
| m30001| Mon Sep 24 23:33:46 [conn3] distributed lock 'test.foo/bs-mm1.local:30001:1348544026:1391051266' unlocked. |
| m30999| Mon Sep 24 23:33:46 [conn1] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||5061261a84ef5d5c2adfd361 and 1 chunks |
| m30998| Mon Sep 24 23:33:46 [Balancer] distributed lock 'balancer/bs-mm1.local:30998:1348544020:16807' acquired, ts : 5061261ac475b0df672ac7aa |
| m30998| Mon Sep 24 23:33:46 [Balancer] *** start balancing round |
| m30999| Mon Sep 24 23:33:46 [conn1] found 2 new chunks for collection test.foo (tracking 2), new version is 0x100b04c80 |
| m30999| Mon Sep 24 23:33:46 [conn1] loaded 2 chunks into new chunk manager for test.foo with version 1|2||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5061261a84ef5d5c2adfd361 based on: 1|0||5061261a84ef5d5c2adfd361 |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: admin.$cmd msg id: 13 op: 2004 attempt: 0 10ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: config.databases msg id: 14 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] shard query: config.databases { _id: "test" } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing over 1 shards |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30000| Mon Sep 24 23:33:46 [conn9] build index config.tags { _id: 1 } |
| m30000| Mon Sep 24 23:33:46 [conn9] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:46 [conn9] info: creating collection config.tags on add index |
| m30000| Mon Sep 24 23:33:46 [conn9] build index config.tags { ns: 1, min: 1 } |
| m30000| Mon Sep 24 23:33:46 [conn9] build index done. scanned 0 total records. 0 secs |
| m30998| Mon Sep 24 23:33:46 [Balancer] shard0001 has more chunks me:2 best: shard0000:0 |
| m30998| Mon Sep 24 23:33:46 [Balancer] collection : test.foo |
| m30998| Mon Sep 24 23:33:46 [Balancer] donor : shard0001 chunks on 2 |
| m30998| Mon Sep 24 23:33:46 [Balancer] receiver : shard0000 chunks on 0 |
| m30998| Mon Sep 24 23:33:46 [Balancer] threshold : 2 |
| m30998| Mon Sep 24 23:33:46 [Balancer] ns: test.foo going to move { _id: "test.foo-num_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5061261a84ef5d5c2adfd361'), ns: "test.foo", min: { num: MinKey }, max: { num: 4.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] cursor type: ParallelSort |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: config.databases msg id: 14 op: 2004 attempt: 0 1ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: config.shards msg id: 15 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] shard query: config.shards { _id: "shard0001" } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} } |
| Mon Sep 24 23:33:46 uncaught exception: command { |
| "movechunk" : "test.foo", |
| "find" : { |
| "num" : 3 |
| }, |
| "to" : "localhost:30000" |
| } failed: { |
| "cause" : { |
| "who" : { |
| "_id" : "test.foo", |
| "process" : "bs-mm1.local:30001:1348544026:1391051266", |
| "state" : 1, |
| "ts" : ObjectId("5061261a2e1bffcddee18cc9"), |
| "when" : ISODate("2012-09-25T03:33:46.287Z"), |
| "who" : "bs-mm1.local:30001:1348544026:1391051266:conn7:1553470752", |
| "why" : "migrate-{ num: MinKey }" |
| }, |
| "errmsg" : "the collection metadata could not be locked with lock migrate-{ num: MinKey }", |
| "ok" : 0 |
| }, |
| "ok" : 0, |
| "errmsg" : "move failed" |
| } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } |
| failed to load: /Users/yellow/buildslave/OS_X_105_64bit_DUR_OFF/mongo/jstests/sharding/shard5.js |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing over 1 shards |
| |
| m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } |
2012-09-24 23:33:54 EDT | m30999| Mon Sep 24 23:33:46 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } |
| m30999| Mon Sep 24 23:33:46 [conn1] cursor type: ParallelSort |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: config.shards msg id: 15 op: 2004 attempt: 0 0ms |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process begin ns: admin.$cmd msg id: 16 op: 2004 attempt: 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] single query: admin.$cmd { movechunk: "test.foo", find: { num: 3.0 }, to: "localhost:30000" } ntoreturn: -1 options : 0 |
| m30999| Mon Sep 24 23:33:46 [conn1] CMD: movechunk: { movechunk: "test.foo", find: { num: 3.0 }, to: "localhost:30000" } |
| m30999| Mon Sep 24 23:33:46 [conn1] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { num: MinKey } max: { num: 4.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 |
| m30999| Mon Sep 24 23:33:46 [conn1] moveChunk result: { who: { _id: "test.foo", process: "bs-mm1.local:30001:1348544026:1391051266", state: 1, ts: ObjectId('5061261a2e1bffcddee18cc9'), when: new Date(1348544026287), who: "bs-mm1.local:30001:1348544026:1391051266:conn7:1553470752", why: "migrate-{ num: MinKey }" }, errmsg: "the collection metadata could not be locked with lock migrate-{ num: MinKey }", ok: 0.0 } |
| m30999| Mon Sep 24 23:33:46 [conn1] Request::process end ns: admin.$cmd msg id: 16 op: 2004 attempt: 0 5ms |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49526 #15 (15 connections now open) |
| m30000| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49527 #16 (16 connections now open) |
| m30998| Mon Sep 24 23:33:46 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||5061261a84ef5d5c2adfd361 and 1 chunks |
| m30998| Mon Sep 24 23:33:46 [Balancer] found 2 new chunks for collection test.foo (tracking 2), new version is 0x100914cd0 |
| m30998| Mon Sep 24 23:33:46 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 1|2||5061261a84ef5d5c2adfd361 |
| m30998| Mon Sep 24 23:33:46 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5061261a84ef5d5c2adfd361 based on: 1|0||5061261a84ef5d5c2adfd361 |
| m30998| Mon Sep 24 23:33:46 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo at: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { num: MinKey } max: { num: 4.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 |
| m30000| Mon Sep 24 23:33:46 got signal 15 (Terminated), will terminate after current cmd ends |
| m30001| Mon Sep 24 23:33:46 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000", secondaryThrottle: false } |
| m30001| Mon Sep 24 23:33:46 [conn7] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Mon Sep 24 23:33:46 [conn3] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000", secondaryThrottle: false } |
| m30001| Mon Sep 24 23:33:46 [conn3] created new distributed lock for test.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Mon Sep 24 23:33:46 [conn7] distributed lock 'test.foo/bs-mm1.local:30001:1348544026:1391051266' acquired, ts : 5061261a2e1bffcddee18cc9 |
| m30001| Mon Sep 24 23:33:46 [conn7] about to log metadata event: { _id: "bs-mm1.local-2012-09-25T03:33:46-1", server: "bs-mm1.local", clientAddr: "127.0.0.1:49524", time: new Date(1348544026288), what: "moveChunk.start", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 4.0 }, from: "shard0001", to: "shard0000" } } |
| m30001| Mon Sep 24 23:33:46 [conn7] moveChunk request accepted at version 1|2||5061261a84ef5d5c2adfd361 |
| m30001| Mon Sep 24 23:33:46 [conn7] moveChunk number of documents: 3 |
| m30001| Mon Sep 24 23:33:46 [initandlisten] connection accepted from 127.0.0.1:49528 #8 (8 connections now open) |
| m30001| Mon Sep 24 23:33:46 [conn3] about to log metadata event: { _id: "bs-mm1.local-2012-09-25T03:33:46-2", server: "bs-mm1.local", clientAddr: "127.0.0.1:49515", time: new Date(1348544026291), what: "moveChunk.from", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 4.0 }, step1 of 6: 0, note: "aborted" } } |
| m30001| Mon Sep 24 23:33:47 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Mon Sep 24 23:33:48 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Mon Sep 24 23:33:48 [FileAllocator] done allocating datafile /data/db/shard50/config.1, size: 128MB, took 8.781 secs |
| m30000| Mon Sep 24 23:33:48 [FileAllocator] allocating new datafile /data/db/shard50/test.ns, filling with zeroes... |
| m30001| Mon Sep 24 23:33:49 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Mon Sep 24 23:33:49 [FileAllocator] done allocating datafile /data/db/shard50/test.ns, size: 16MB, took 1.03 secs |
| m30000| Mon Sep 24 23:33:49 [FileAllocator] allocating new datafile /data/db/shard50/test.0, filling with zeroes... |
| m30001| Mon Sep 24 23:33:50 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Mon Sep 24 23:33:51 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30999| Mon Sep 24 23:33:52 [Balancer] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:52 BackgroundJob starting: ConnectBG |
| m30000| Mon Sep 24 23:33:52 [initandlisten] connection accepted from 127.0.0.1:49532 #17 (17 connections now open) |
| m30999| Mon Sep 24 23:33:52 [Balancer] connected connection! |
| m30001| Mon Sep 24 23:33:52 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Mon Sep 24 23:33:53 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Mon Sep 24 23:33:54 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 4.0 }, shardKeyPattern: { num: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Mon Sep 24 23:33:54 [FileAllocator] done allocating datafile /data/db/shard50/test.0, size: 64MB, took 4.681 secs |
| m30000| Mon Sep 24 23:33:54 [FileAllocator] allocating new datafile /data/db/shard50/test.1, filling with zeroes... |
| m30000| Mon Sep 24 23:33:54 [migrateThread] build index test.foo { _id: 1 } |
| m30000| Mon Sep 24 23:33:54 [migrateThread] build index done. scanned 0 total records. 0 secs |
| m30000| Mon Sep 24 23:33:54 [migrateThread] info: creating collection test.foo on add index |
| m30000| Mon Sep 24 23:33:54 [interruptThread] now exiting |
| m30000| Mon Sep 24 23:33:54 dbexit: |
| m30000| Mon Sep 24 23:33:54 [interruptThread] shutdown: going to close listening sockets... |
| m30000| Mon Sep 24 23:33:54 [interruptThread] closing listening socket: 15 |
| m30000| Mon Sep 24 23:33:54 [interruptThread] closing listening socket: 16 |
| m30000| Mon Sep 24 23:33:54 [interruptThread] closing listening socket: 17 |
| m30000| Mon Sep 24 23:33:54 [interruptThread] removing socket file: /tmp/mongodb-30000.sock |
| m30000| Mon Sep 24 23:33:54 [interruptThread] shutdown: going to flush diaglog... |
| m30000| Mon Sep 24 23:33:54 [interruptThread] shutdown: going to close sockets... |
| m30000| Mon Sep 24 23:33:54 [interruptThread] shutdown: waiting for fs preallocator... |
| m30000| Mon Sep 24 23:33:54 [conn1] end connection 127.0.0.1:49487 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn9] end connection 127.0.0.1:49511 (16 connections now open) |
| m30001| Mon Sep 24 23:33:54 [conn8] end connection 127.0.0.1:49528 (7 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn16] end connection 127.0.0.1:49527 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn2] end connection 127.0.0.1:49490 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn4] end connection 127.0.0.1:49493 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn3] end connection 127.0.0.1:49492 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn10] end connection 127.0.0.1:49516 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn6] end connection 127.0.0.1:49508 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn12] end connection 127.0.0.1:49519 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn11] end connection 127.0.0.1:49518 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn14] end connection 127.0.0.1:49525 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn13] end connection 127.0.0.1:49522 (16 connections now open) |
| m30000| Mon Sep 24 23:33:54 [conn15] end connection 127.0.0.1:49526 (16 connections now open) |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] Socket recv() conn closed? 127.0.0.1:30000 |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000] |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('5061261484ef5d5c2adfd35e') } |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('5061261484ef5d5c2adfd35e') } |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 |
| m30999| Mon Sep 24 23:33:54 BackgroundJob starting: ConnectBG |
| m30999| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] ERROR: backgroundjob WriteBackListener-localhost:30000error: socket exception [CONNECT_ERROR] for localhost:30000 |
| m30999| Mon Sep 24 23:33:54 [Balancer] Socket recv() conn closed? 127.0.0.1:30000 |
| m30999| Mon Sep 24 23:33:54 [Balancer] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000] |
| m30999| Mon Sep 24 23:33:54 [Balancer] DBClientCursor::init call() failed |
| m30999| Mon Sep 24 23:33:54 [Balancer] Assertion: 13632:couldn't get updated shard list from config server |
| m30999| 0x100026d9b 0x10001fa8c 0x1002736b6 0x1001b2d20 0x1002df54b 0x1002e17b7 0x100356d43 0x7fff8a34efd6 0x7fff8a34ee89 |
| m30999| 0 mongos 0x0000000100026d9b _ZN5mongo15printStackTraceERSo + 43 |
| m30999| 1 mongos 0x000000010001fa8c _ZN5mongo11msgassertedEiPKc + 204 |
| m30999| 2 mongos 0x00000001002736b6 _ZN5mongo15StaticShardInfo6reloadEv + 4118 |
| m30999| 3 mongos 0x00000001001b2d20 _ZN5mongo8Balancer3runEv + 880 |
| m30999| 4 mongos 0x00000001002df54b _ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE + 187 |
| m30999| 5 mongos 0x00000001002e17b7 _ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv + 119 |
| m30999| 6 mongos 0x0000000100356d43 thread_proxy + 163 |
| m30999| 7 libSystem.B.dylib 0x00007fff8a34efd6 _pthread_start + 331 |
| m30999| 8 libSystem.B.dylib 0x00007fff8a34ee89 thread_start + 13 |
| m30999| Mon Sep 24 23:33:54 [Balancer] scoped connection to localhost:30000 not being returned to the pool |
| m30999| Mon Sep 24 23:33:54 [Balancer] caught exception while doing balance: couldn't get updated shard list from config server |
| m30999| Mon Sep 24 23:33:54 [Balancer] *** End of balancing round |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] Socket recv() conn closed? 127.0.0.1:30000 |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000] |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50612614c475b0df672ac7a8') } |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50612614c475b0df672ac7a8') } |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] Socket recv() errno:54 Connection reset by peer 127.0.0.1:30000 |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000] |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed |
| m30998| Mon Sep 24 23:33:54 [WriteBackListener-localhost:30000] Assertion: 13632:couldn't get updated shard list from config server |
| m30998| 0x100026d9b 0x10001fa8c 0x1002736b6 0x10029fcf5 0x1002df54b 0x1002e17b7 0x100356d43 0x7fff8a34efd6 0x7fff8a34ee89 |
| m30998| 0 mongos 0x0000000100026d9b _ZN5mongo15printStackTraceERSo + 43 |
| m30998| 1 mongos 0x000000010001fa8c _ZN5mongo11msgassertedEiPKc + 204 |
| m30998| 2 mongos 0x00000001002736b6 _ZN5mongo15StaticShardInfo6reloadEv + 4118 |
| m30998| 3 mongos 0x000000010029fcf5 _ZN5mongo17WriteBackListener3runEv + 16373 |
| m30998| 4 mongos 0x00000001002df54b _ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE + 187 |
| m30998| 5 mongos 0x00000001002e17b7 _ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv + 119 |
| m30998| 6 mongos 0x0000000100356d43 thread_proxy + 163 |
| m30998| 7 libSystem.B.dylib 0x00007fff8a34efd6 _pthread_start + 331 |
2012-09-24 23:34:04 EDT | m30998| 8 libSystem.B.dylib 0x00007fff8a34ee89 thread_start + 13 |
| Mon Sep 24 23:34:04 got signal 15 (Terminated), will terminate after current cmd ends |
| Mon Sep 24 23:34:04 [interruptThread] now exiting |
| Mon Sep 24 23:34:04 dbexit: |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: going to close listening sockets... |
| Mon Sep 24 23:34:04 [interruptThread] closing listening socket: 6 |
| Mon Sep 24 23:34:04 [interruptThread] closing listening socket: 7 |
| Mon Sep 24 23:34:04 [interruptThread] closing listening socket: 9 |
| Mon Sep 24 23:34:04 [interruptThread] removing socket file: /tmp/mongodb-27999.sock |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: going to flush diaglog... |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: going to close sockets... |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: waiting for fs preallocator... |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: closing all files... |
| Mon Sep 24 23:34:04 [interruptThread] closeAllFiles() finished |
| Mon Sep 24 23:34:04 [interruptThread] shutdown: removing fs lock... |
| Mon Sep 24 23:34:04 [conn81] end connection 127.0.0.1:49234 (4 connections now open) |
| Mon Sep 24 23:34:04 [conn85] end connection 127.0.0.1:49479 (4 connections now open) |
| Mon Sep 24 23:34:04 [conn84] end connection 127.0.0.1:49407 (4 connections now open) |
| Mon Sep 24 23:34:04 [conn83] end connection 127.0.0.1:49337 (4 connections now open) |
| Mon Sep 24 23:34:04 [conn82] end connection 127.0.0.1:49279 (4 connections now open) |
| Mon Sep 24 23:34:04 dbexit: really exiting now |