cwd [/home/gregorv/Workspaces/10Gen/mongo]
num procs:222
killing: 30191 pts/8 S+ 0:00 vi src/mongo/shell/replsettest.js
num procs:221
removing: /data/db/sconsTests//local.ns
removing: /data/db/sconsTests//mongod.lock
removing: /data/db/sconsTests//local.0
Thu Jan 17 16:43:35.266 [initandlisten] MongoDB starting : pid=30204 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=nuwen
Thu Jan 17 16:43:35.266 [initandlisten]
Thu Jan 17 16:43:35.266 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB.
Thu Jan 17 16:43:35.266 [initandlisten] ** Not recommended for production.
Thu Jan 17 16:43:35.266 [initandlisten]
Thu Jan 17 16:43:35.266 [initandlisten] db version v2.3.3-pre-, pdfile version 4.5
Thu Jan 17 16:43:35.266 [initandlisten] git version: b8c0b6ae71fb6c076101fa25ef915021ea26e156
Thu Jan 17 16:43:35.266 [initandlisten] build info: Linux nuwen 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 BOOST_LIB_VERSION=1_49
Thu Jan 17 16:43:35.266 [initandlisten] allocator: tcmalloc
Thu Jan 17 16:43:35.266 [initandlisten] options: { dbpath: "/data/db/sconsTests/", nopreallocj: true, port: 27999, setParameter: [ "enableTestCommands=1" ] }
Thu Jan 17 16:43:35.323 [initandlisten] journal dir=/data/db/sconsTests/journal
Thu Jan 17 16:43:35.323 [initandlisten] recover : no journal files present, no recovery needed
Thu Jan 17 16:43:35.374 [FileAllocator] allocating new datafile /data/db/sconsTests/local.ns, filling with zeroes...
Thu Jan 17 16:43:35.374 [FileAllocator] creating directory /data/db/sconsTests/_tmp
Thu Jan 17 16:43:35.557 [FileAllocator] done allocating datafile /data/db/sconsTests/local.ns, size: 16MB, took 0.15 secs
Thu Jan 17 16:43:35.557 [FileAllocator] allocating new datafile /data/db/sconsTests/local.0, filling with zeroes...
Thu Jan 17 16:43:36.073 [FileAllocator] done allocating datafile /data/db/sconsTests/local.0, size: 64MB, took 0.515 secs
Thu Jan 17 16:43:36.074 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 700ms
Thu Jan 17 16:43:36.075 [initandlisten] waiting for connections on port 27999
Thu Jan 17 16:43:36.075 [websvr] admin web console waiting for connections on port 28999
running /home/gregorv/Workspaces/10Gen/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1 --nopreallocj
*******************************************
Test : sync_change_source.js ...
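The log below shows the shell shaping loopback traffic with Linux tc before the replica set is built: it deletes any existing root qdisc on lo, installs a root HTB qdisc, and adds a full-rate parent class plus a default child class. A minimal sketch of driving that from the mongo shell follows; the runTc wrapper is purely illustrative (the actual helper inside sync_change_source.js is not visible in this log), and it assumes the shell's runProgram() builtin plus passwordless sudo for tc.

// Hypothetical wrapper mirroring the "sudo bash -c tc ... > /tmp/output"
// invocations that appear in the log below.
function runTc(cmd) {
    return runProgram('sudo', 'bash', '-c', cmd + ' > /tmp/output');
}

// Root HTB hierarchy on loopback, matching the commands in the log:
runTc('tc qdisc del root dev lo');                                                 // clear any leftover config
runTc('tc qdisc add dev lo handle 1: root htb');                                   // root HTB qdisc
runTc('tc class add dev lo parent 1: classid 1:1 htb rate 1000Mbps');              // parent class at full rate
runTc('tc class add dev lo parent 1:1 classid 1:10 htb rate 1bps ceil 1000Mbps');  // default child class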
Command : /home/gregorv/Workspaces/10Gen/mongo/mongo --port 27999 --authenticationMechanism MONGO-CR /home/gregorv/Workspaces/10Gen/mongo/jstests/replsets/sync_change_source.js --eval TestData = new Object();TestData.testPath = "/home/gregorv/Workspaces/10Gen/mongo/jstests/replsets/sync_change_source.js";TestData.testFile = "sync_change_source.js";TestData.testName = "sync_change_source";TestData.noJournal = false;TestData.noJournalPrealloc = true;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Thu Jan 17 16:43:36 2013 MongoDB shell version: 2.3.3-pre- connecting to: 127.0.0.1:27999/test Thu Jan 17 16:43:36.275 [initandlisten] connection accepted from 127.0.0.1:56639 #2 (1 connection now open) null Running: sudo bash -c tc qdisc del root dev lo > /tmp/output Thu Jan 17 16:43:36.279 shell: started program sudo bash -c tc qdisc del root dev lo > /tmp/output Running: sudo bash -c tc qdisc add dev lo handle 1: root htb > /tmp/output Thu Jan 17 16:44:11.057 shell: started program sudo bash -c tc qdisc add dev lo handle 1: root htb > /tmp/output Running: sudo bash -c tc class add dev lo parent 1: classid 1:1 htb rate 1000Mbps > /tmp/output Thu Jan 17 16:44:11.072 shell: started program sudo bash -c tc class add dev lo parent 1: classid 1:1 htb rate 1000Mbps > /tmp/output Running: sudo bash -c tc class add dev lo parent 1:1 classid 1:10 htb rate 1bps ceil 1000Mbps > /tmp/output Thu Jan 17 16:44:11.086 shell: started program sudo bash -c tc class add dev lo parent 1:1 classid 1:10 htb rate 1bps ceil 1000Mbps > /tmp/output ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002, 31003 ] 31000 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testReplSet", "dbpath" : "$set-$node", "useIP" : true, "restart" : undefined, "pathOpts" : { "node" : 0, "set" : "testReplSet" } } ReplSetTest Starting.... Using Local IP for host: 127.0.0.2 Resetting db path '/data/db/testReplSet-0' Thu Jan 17 16:44:11.146 shell: started program /home/gregorv/Workspaces/10Gen/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet testReplSet --dbpath /data/db/testReplSet-0 --nopreallocj --bind_ip 127.0.0.2 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Thu Jan 17 16:44:11.176 [initandlisten] MongoDB starting : pid=30249 port=31000 dbpath=/data/db/testReplSet-0 64-bit host=nuwen m31000| Thu Jan 17 16:44:11.176 [initandlisten] m31000| Thu Jan 17 16:44:11.176 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB. m31000| Thu Jan 17 16:44:11.176 [initandlisten] ** Not recommended for production. 
m31000| Thu Jan 17 16:44:11.176 [initandlisten] m31000| Thu Jan 17 16:44:11.176 [initandlisten] db version v2.3.3-pre-, pdfile version 4.5 m31000| Thu Jan 17 16:44:11.176 [initandlisten] git version: b8c0b6ae71fb6c076101fa25ef915021ea26e156 m31000| Thu Jan 17 16:44:11.176 [initandlisten] build info: Linux nuwen 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 BOOST_LIB_VERSION=1_49 m31000| Thu Jan 17 16:44:11.176 [initandlisten] allocator: tcmalloc m31000| Thu Jan 17 16:44:11.176 [initandlisten] options: { bind_ip: "127.0.0.2", dbpath: "/data/db/testReplSet-0", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31000, replSet: "testReplSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Thu Jan 17 16:44:11.254 [initandlisten] journal dir=/data/db/testReplSet-0/journal m31000| Thu Jan 17 16:44:11.255 [initandlisten] recover : no journal files present, no recovery needed m31000| Thu Jan 17 16:44:11.306 [FileAllocator] allocating new datafile /data/db/testReplSet-0/local.ns, filling with zeroes... m31000| Thu Jan 17 16:44:11.306 [FileAllocator] creating directory /data/db/testReplSet-0/_tmp m31000| Thu Jan 17 16:44:11.481 [FileAllocator] done allocating datafile /data/db/testReplSet-0/local.ns, size: 16MB, took 0.141 secs m31000| Thu Jan 17 16:44:11.481 [FileAllocator] allocating new datafile /data/db/testReplSet-0/local.0, filling with zeroes... m31000| Thu Jan 17 16:44:11.623 [FileAllocator] done allocating datafile /data/db/testReplSet-0/local.0, size: 16MB, took 0.141 secs m31000| Thu Jan 17 16:44:11.624 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 317ms m31000| Thu Jan 17 16:44:11.624 [initandlisten] waiting for connections on port 31000 m31000| Thu Jan 17 16:44:11.624 [websvr] admin web console waiting for connections on port 32000 m31000| Thu Jan 17 16:44:11.626 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31000| Thu Jan 17 16:44:11.626 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31000| Thu Jan 17 16:44:11.748 [initandlisten] connection accepted from 127.0.0.1:56353 #1 (1 connection now open) [ connection to 127.0.0.2:31000 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31000, 31001, 31002, 31003 ] 31001 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31001, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testReplSet", "dbpath" : "$set-$node", "useIP" : true, "restart" : undefined, "pathOpts" : { "node" : 1, "set" : "testReplSet" } } ReplSetTest Starting.... Using Local IP for host: 127.0.0.3 Resetting db path '/data/db/testReplSet-1' Thu Jan 17 16:44:11.781 shell: started program /home/gregorv/Workspaces/10Gen/mongo/mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet testReplSet --dbpath /data/db/testReplSet-1 --nopreallocj --bind_ip 127.0.0.3 --setParameter enableTestCommands=1 m31001| note: noprealloc may hurt performance in many applications m31001| Thu Jan 17 16:44:11.807 [initandlisten] MongoDB starting : pid=30300 port=31001 dbpath=/data/db/testReplSet-1 64-bit host=nuwen m31001| Thu Jan 17 16:44:11.807 [initandlisten] m31001| Thu Jan 17 16:44:11.807 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB. m31001| Thu Jan 17 16:44:11.807 [initandlisten] ** Not recommended for production. 
m31001| Thu Jan 17 16:44:11.807 [initandlisten] m31001| Thu Jan 17 16:44:11.807 [initandlisten] db version v2.3.3-pre-, pdfile version 4.5 m31001| Thu Jan 17 16:44:11.807 [initandlisten] git version: b8c0b6ae71fb6c076101fa25ef915021ea26e156 m31001| Thu Jan 17 16:44:11.807 [initandlisten] build info: Linux nuwen 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 BOOST_LIB_VERSION=1_49 m31001| Thu Jan 17 16:44:11.807 [initandlisten] allocator: tcmalloc m31001| Thu Jan 17 16:44:11.807 [initandlisten] options: { bind_ip: "127.0.0.3", dbpath: "/data/db/testReplSet-1", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31001, replSet: "testReplSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31001| Thu Jan 17 16:44:11.865 [initandlisten] journal dir=/data/db/testReplSet-1/journal m31001| Thu Jan 17 16:44:11.865 [initandlisten] recover : no journal files present, no recovery needed m31001| Thu Jan 17 16:44:11.908 [FileAllocator] allocating new datafile /data/db/testReplSet-1/local.ns, filling with zeroes... m31001| Thu Jan 17 16:44:11.908 [FileAllocator] creating directory /data/db/testReplSet-1/_tmp m31001| Thu Jan 17 16:44:12.082 [FileAllocator] done allocating datafile /data/db/testReplSet-1/local.ns, size: 16MB, took 0.141 secs m31001| Thu Jan 17 16:44:12.083 [FileAllocator] allocating new datafile /data/db/testReplSet-1/local.0, filling with zeroes... m31001| Thu Jan 17 16:44:12.224 [FileAllocator] done allocating datafile /data/db/testReplSet-1/local.0, size: 16MB, took 0.141 secs m31001| Thu Jan 17 16:44:12.225 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 317ms m31001| Thu Jan 17 16:44:12.225 [initandlisten] waiting for connections on port 31001 m31001| Thu Jan 17 16:44:12.226 [websvr] admin web console waiting for connections on port 32001 m31001| Thu Jan 17 16:44:12.227 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31001| Thu Jan 17 16:44:12.227 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31001| Thu Jan 17 16:44:12.383 [initandlisten] connection accepted from 127.0.0.1:59473 #1 (1 connection now open) [ connection to 127.0.0.2:31000, connection to 127.0.0.3:31001 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31000, 31001, 31002, 31003 ] 31002 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31002, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testReplSet", "dbpath" : "$set-$node", "useIP" : true, "restart" : undefined, "pathOpts" : { "node" : 2, "set" : "testReplSet" } } ReplSetTest Starting.... Using Local IP for host: 127.0.0.4 Resetting db path '/data/db/testReplSet-2' Thu Jan 17 16:44:12.417 shell: started program /home/gregorv/Workspaces/10Gen/mongo/mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet testReplSet --dbpath /data/db/testReplSet-2 --nopreallocj --bind_ip 127.0.0.4 --setParameter enableTestCommands=1 m31002| note: noprealloc may hurt performance in many applications m31002| Thu Jan 17 16:44:12.442 [initandlisten] MongoDB starting : pid=30351 port=31002 dbpath=/data/db/testReplSet-2 64-bit host=nuwen m31002| Thu Jan 17 16:44:12.443 [initandlisten] m31002| Thu Jan 17 16:44:12.443 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB. 
m31002| Thu Jan 17 16:44:12.443 [initandlisten] ** Not recommended for production. m31002| Thu Jan 17 16:44:12.443 [initandlisten] m31002| Thu Jan 17 16:44:12.443 [initandlisten] db version v2.3.3-pre-, pdfile version 4.5 m31002| Thu Jan 17 16:44:12.443 [initandlisten] git version: b8c0b6ae71fb6c076101fa25ef915021ea26e156 m31002| Thu Jan 17 16:44:12.443 [initandlisten] build info: Linux nuwen 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 BOOST_LIB_VERSION=1_49 m31002| Thu Jan 17 16:44:12.443 [initandlisten] allocator: tcmalloc m31002| Thu Jan 17 16:44:12.443 [initandlisten] options: { bind_ip: "127.0.0.4", dbpath: "/data/db/testReplSet-2", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31002, replSet: "testReplSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31002| Thu Jan 17 16:44:12.512 [initandlisten] journal dir=/data/db/testReplSet-2/journal m31002| Thu Jan 17 16:44:12.512 [initandlisten] recover : no journal files present, no recovery needed m31002| Thu Jan 17 16:44:12.563 [FileAllocator] allocating new datafile /data/db/testReplSet-2/local.ns, filling with zeroes... m31002| Thu Jan 17 16:44:12.563 [FileAllocator] creating directory /data/db/testReplSet-2/_tmp m31002| Thu Jan 17 16:44:12.746 [FileAllocator] done allocating datafile /data/db/testReplSet-2/local.ns, size: 16MB, took 0.15 secs m31002| Thu Jan 17 16:44:12.746 [FileAllocator] allocating new datafile /data/db/testReplSet-2/local.0, filling with zeroes... m31002| Thu Jan 17 16:44:12.888 [FileAllocator] done allocating datafile /data/db/testReplSet-2/local.0, size: 16MB, took 0.141 secs m31002| Thu Jan 17 16:44:12.889 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 326ms m31002| Thu Jan 17 16:44:12.889 [initandlisten] waiting for connections on port 31002 m31002| Thu Jan 17 16:44:12.889 [websvr] admin web console waiting for connections on port 32002 m31002| Thu Jan 17 16:44:12.890 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31002| Thu Jan 17 16:44:12.890 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31002| Thu Jan 17 16:44:13.019 [initandlisten] connection accepted from 127.0.0.1:60376 #1 (1 connection now open) [ connection to 127.0.0.2:31000, connection to 127.0.0.3:31001, connection to 127.0.0.4:31002 ] ReplSetTest n is : 3 ReplSetTest n: 3 ports: [ 31000, 31001, 31002, 31003 ] 31003 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31003, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testReplSet", "dbpath" : "$set-$node", "useIP" : true, "restart" : undefined, "pathOpts" : { "node" : 3, "set" : "testReplSet" } } ReplSetTest Starting.... 
Using Local IP for host: 127.0.0.5 Resetting db path '/data/db/testReplSet-3' Thu Jan 17 16:44:13.054 shell: started program /home/gregorv/Workspaces/10Gen/mongo/mongod --oplogSize 40 --port 31003 --noprealloc --smallfiles --rest --replSet testReplSet --dbpath /data/db/testReplSet-3 --nopreallocj --bind_ip 127.0.0.5 --setParameter enableTestCommands=1 m31003| note: noprealloc may hurt performance in many applications m31003| Thu Jan 17 16:44:13.077 [initandlisten] MongoDB starting : pid=30403 port=31003 dbpath=/data/db/testReplSet-3 64-bit host=nuwen m31003| Thu Jan 17 16:44:13.077 [initandlisten] m31003| Thu Jan 17 16:44:13.077 [initandlisten] ** NOTE: This is a development version (2.3.3-pre-) of MongoDB. m31003| Thu Jan 17 16:44:13.077 [initandlisten] ** Not recommended for production. m31003| Thu Jan 17 16:44:13.077 [initandlisten] m31003| Thu Jan 17 16:44:13.077 [initandlisten] db version v2.3.3-pre-, pdfile version 4.5 m31003| Thu Jan 17 16:44:13.077 [initandlisten] git version: b8c0b6ae71fb6c076101fa25ef915021ea26e156 m31003| Thu Jan 17 16:44:13.077 [initandlisten] build info: Linux nuwen 3.5.0-21-generic #32-Ubuntu SMP Tue Dec 11 18:51:59 UTC 2012 x86_64 BOOST_LIB_VERSION=1_49 m31003| Thu Jan 17 16:44:13.077 [initandlisten] allocator: tcmalloc m31003| Thu Jan 17 16:44:13.077 [initandlisten] options: { bind_ip: "127.0.0.5", dbpath: "/data/db/testReplSet-3", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31003, replSet: "testReplSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31003| Thu Jan 17 16:44:13.130 [initandlisten] journal dir=/data/db/testReplSet-3/journal m31003| Thu Jan 17 16:44:13.130 [initandlisten] recover : no journal files present, no recovery needed m31003| Thu Jan 17 16:44:13.181 [FileAllocator] allocating new datafile /data/db/testReplSet-3/local.ns, filling with zeroes... m31003| Thu Jan 17 16:44:13.181 [FileAllocator] creating directory /data/db/testReplSet-3/_tmp m31003| Thu Jan 17 16:44:13.356 [FileAllocator] done allocating datafile /data/db/testReplSet-3/local.ns, size: 16MB, took 0.141 secs m31003| Thu Jan 17 16:44:13.356 [FileAllocator] allocating new datafile /data/db/testReplSet-3/local.0, filling with zeroes... 
m31003| Thu Jan 17 16:44:13.497 [FileAllocator] done allocating datafile /data/db/testReplSet-3/local.0, size: 16MB, took 0.141 secs m31003| Thu Jan 17 16:44:13.498 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 317ms m31003| Thu Jan 17 16:44:13.499 [initandlisten] waiting for connections on port 31003 m31003| Thu Jan 17 16:44:13.499 [websvr] admin web console waiting for connections on port 32003 m31003| Thu Jan 17 16:44:13.501 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31003| Thu Jan 17 16:44:13.501 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31003| Thu Jan 17 16:44:13.655 [initandlisten] connection accepted from 127.0.0.1:49564 #1 (1 connection now open) [ connection to 127.0.0.2:31000, connection to 127.0.0.3:31001, connection to 127.0.0.4:31002, connection to 127.0.0.5:31003 ] { "replSetInitiate" : { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000" }, { "_id" : 1, "host" : "127.0.0.3:31001" }, { "_id" : 2, "host" : "127.0.0.4:31002" }, { "_id" : 3, "host" : "127.0.0.5:31003" } ] } } m31000| Thu Jan 17 16:44:13.658 [conn1] replSet replSetInitiate admin command received from client m31000| Thu Jan 17 16:44:13.658 [conn1] replSet replSetInitiate config object parses ok, 4 members specified m31001| Thu Jan 17 16:44:13.659 [initandlisten] connection accepted from 127.0.0.2:60528 #2 (2 connections now open) m31002| Thu Jan 17 16:44:13.660 [initandlisten] connection accepted from 127.0.0.2:38393 #2 (2 connections now open) m31003| Thu Jan 17 16:44:13.660 [initandlisten] connection accepted from 127.0.0.2:35613 #2 (2 connections now open) m31000| Thu Jan 17 16:44:13.661 [conn1] replSet replSetInitiate all members seem up m31000| Thu Jan 17 16:44:13.661 [conn1] ****** m31000| Thu Jan 17 16:44:13.661 [conn1] creating replication oplog of size: 40MB... m31000| Thu Jan 17 16:44:13.661 [FileAllocator] allocating new datafile /data/db/testReplSet-0/local.1, filling with zeroes... m31000| Thu Jan 17 16:44:14.181 [FileAllocator] done allocating datafile /data/db/testReplSet-0/local.1, size: 64MB, took 0.518 secs m31000| Thu Jan 17 16:44:15.280 [conn1] ****** m31000| Thu Jan 17 16:44:15.280 [conn1] replSet info saving a newer config version to local.system.replset m31000| Thu Jan 17 16:44:15.282 [conn1] replSet saveConfigLocally done m31000| Thu Jan 17 16:44:15.282 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. m31000| Thu Jan 17 16:44:15.282 [conn1] build index local.replset.minvalid { _id: 1 } m31000| Thu Jan 17 16:44:15.283 [conn1] build index done. scanned 0 total records. 0 secs m31000| Thu Jan 17 16:44:15.283 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "testReplSet", members: [ { _id: 0.0, host: "127.0.0.2:31000" }, { _id: 1.0, host: "127.0.0.3:31001" }, { _id: 2.0, host: "127.0.0.4:31002" }, { _id: 3.0, host: "127.0.0.5:31003" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:1622244 reslen:112 1625ms { "info" : "Config now saved locally. 
Should come online in about a minute.", "ok" : 1 } m31000| Thu Jan 17 16:44:21.626 [rsStart] replSet I am 127.0.0.2:31000 m31000| Thu Jan 17 16:44:21.627 [rsStart] replSet STARTUP2 m31000| Thu Jan 17 16:44:21.627 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:44:21.627 [conn2] end connection 127.0.0.2:60528 (1 connection now open) m31000| Thu Jan 17 16:44:21.627 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:44:21.627 [initandlisten] connection accepted from 127.0.0.2:56485 #3 (2 connections now open) m31000| Thu Jan 17 16:44:21.627 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31000| Thu Jan 17 16:44:21.627 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31001| Thu Jan 17 16:44:22.227 [rsStart] trying to contact 127.0.0.2:31000 m31000| Thu Jan 17 16:44:22.227 [initandlisten] connection accepted from 127.0.0.3:38772 #2 (2 connections now open) m31001| Thu Jan 17 16:44:22.228 [rsStart] replSet I am 127.0.0.3:31001 m31001| Thu Jan 17 16:44:22.228 [rsStart] replSet got config version 1 from a remote, saving locally m31001| Thu Jan 17 16:44:22.228 [rsStart] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:44:22.442 [rsStart] replSet saveConfigLocally done m31001| Thu Jan 17 16:44:22.442 [rsStart] replSet STARTUP2 m31001| Thu Jan 17 16:44:22.442 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31001| Thu Jan 17 16:44:22.442 [rsSync] ****** m31001| Thu Jan 17 16:44:22.442 [rsSync] creating replication oplog of size: 40MB... m31001| Thu Jan 17 16:44:22.466 [FileAllocator] allocating new datafile /data/db/testReplSet-1/local.1, filling with zeroes... m31000| Thu Jan 17 16:44:22.627 [rsSync] replSet SECONDARY m31002| Thu Jan 17 16:44:22.891 [rsStart] trying to contact 127.0.0.2:31000 m31000| Thu Jan 17 16:44:22.891 [initandlisten] connection accepted from 127.0.0.4:35119 #3 (3 connections now open) m31002| Thu Jan 17 16:44:22.892 [rsStart] replSet I am 127.0.0.4:31002 m31002| Thu Jan 17 16:44:22.892 [rsStart] replSet got config version 1 from a remote, saving locally m31002| Thu Jan 17 16:44:22.892 [rsStart] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:44:23.099 [FileAllocator] done allocating datafile /data/db/testReplSet-1/local.1, size: 64MB, took 0.632 secs m31002| Thu Jan 17 16:44:23.100 [rsStart] replSet saveConfigLocally done m31002| Thu Jan 17 16:44:23.100 [rsStart] replSet STARTUP2 m31002| Thu Jan 17 16:44:23.100 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31002| Thu Jan 17 16:44:23.101 [rsSync] ****** m31002| Thu Jan 17 16:44:23.101 [rsSync] creating replication oplog of size: 40MB... m31002| Thu Jan 17 16:44:23.317 [FileAllocator] allocating new datafile /data/db/testReplSet-2/local.1, filling with zeroes... 
m31003| Thu Jan 17 16:44:23.501 [rsStart] trying to contact 127.0.0.2:31000 m31000| Thu Jan 17 16:44:23.502 [initandlisten] connection accepted from 127.0.0.5:39186 #4 (4 connections now open) m31003| Thu Jan 17 16:44:23.502 [rsStart] replSet I am 127.0.0.5:31003 m31003| Thu Jan 17 16:44:23.502 [rsStart] replSet got config version 1 from a remote, saving locally m31003| Thu Jan 17 16:44:23.502 [rsStart] replSet info saving a newer config version to local.system.replset m31000| Thu Jan 17 16:44:23.627 [rsHealthPoll] replset info 127.0.0.4:31002 thinks that we are down m31000| Thu Jan 17 16:44:23.627 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state STARTUP2 m31000| Thu Jan 17 16:44:23.628 [rsHealthPoll] replset info 127.0.0.3:31001 thinks that we are down m31000| Thu Jan 17 16:44:23.628 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state STARTUP2 m31000| Thu Jan 17 16:44:23.628 [rsMgr] not electing self, 127.0.0.4:31002 would veto with 'I don't think 127.0.0.2:31000 is electable' m31000| Thu Jan 17 16:44:23.628 [rsMgr] not electing self, 127.0.0.4:31002 would veto with 'I don't think 127.0.0.2:31000 is electable' m31001| Thu Jan 17 16:44:24.141 [rsSync] ****** m31001| Thu Jan 17 16:44:24.141 [rsSync] replSet initial sync pending m31001| Thu Jan 17 16:44:24.141 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31002| Thu Jan 17 16:44:24.141 [FileAllocator] done allocating datafile /data/db/testReplSet-2/local.1, size: 64MB, took 0.823 secs m31003| Thu Jan 17 16:44:24.155 [rsStart] replSet saveConfigLocally done m31003| Thu Jan 17 16:44:24.155 [rsStart] replSet STARTUP2 m31003| Thu Jan 17 16:44:24.155 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31003| Thu Jan 17 16:44:24.156 [rsSync] ****** m31003| Thu Jan 17 16:44:24.156 [rsSync] creating replication oplog of size: 40MB... m31000| Thu Jan 17 16:44:24.229 [conn2] end connection 127.0.0.3:38772 (3 connections now open) m31003| Thu Jan 17 16:44:24.229 [initandlisten] connection accepted from 127.0.0.3:34297 #3 (3 connections now open) m31002| Thu Jan 17 16:44:24.229 [initandlisten] connection accepted from 127.0.0.3:54117 #3 (3 connections now open) m31000| Thu Jan 17 16:44:24.229 [initandlisten] connection accepted from 127.0.0.3:43578 #5 (5 connections now open) m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replset info 127.0.0.4:31002 thinks that we are down m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replset info 127.0.0.5:31003 thinks that we are down m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state STARTUP2 m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:44:24.229 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state STARTUP2 m31003| Thu Jan 17 16:44:24.391 [FileAllocator] allocating new datafile /data/db/testReplSet-3/local.1, filling with zeroes... 
m31000| Thu Jan 17 16:44:24.892 [conn3] end connection 127.0.0.4:35119 (3 connections now open) m31000| Thu Jan 17 16:44:24.892 [initandlisten] connection accepted from 127.0.0.4:53212 #6 (4 connections now open) m31003| Thu Jan 17 16:44:24.892 [initandlisten] connection accepted from 127.0.0.4:54116 #4 (4 connections now open) m31001| Thu Jan 17 16:44:24.892 [initandlisten] connection accepted from 127.0.0.4:57631 #4 (3 connections now open) m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state STARTUP2 m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replset info 127.0.0.5:31003 thinks that we are down m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:44:24.892 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state STARTUP2 m31002| Thu Jan 17 16:44:25.033 [rsSync] ****** m31002| Thu Jan 17 16:44:25.033 [rsSync] replSet initial sync pending m31002| Thu Jan 17 16:44:25.033 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31003| Thu Jan 17 16:44:25.033 [FileAllocator] done allocating datafile /data/db/testReplSet-3/local.1, size: 64MB, took 0.641 secs m31000| Thu Jan 17 16:44:25.502 [conn4] end connection 127.0.0.5:39186 (3 connections now open) m31001| Thu Jan 17 16:44:25.503 [initandlisten] connection accepted from 127.0.0.5:55339 #5 (4 connections now open) m31000| Thu Jan 17 16:44:25.503 [initandlisten] connection accepted from 127.0.0.5:46279 #7 (5 connections now open) m31002| Thu Jan 17 16:44:25.503 [initandlisten] connection accepted from 127.0.0.5:59512 #4 (4 connections now open) m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state STARTUP2 m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31003| Thu Jan 17 16:44:25.503 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state STARTUP2 m31000| Thu Jan 17 16:44:25.627 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state STARTUP2 m31000| Thu Jan 17 16:44:25.628 [rsMgr] not electing self, 127.0.0.5:31003 would veto with 'I don't think 127.0.0.2:31000 is electable' m31003| Thu Jan 17 16:44:25.925 [rsSync] ****** m31003| Thu Jan 17 16:44:25.925 [rsSync] replSet initial sync pending m31003| Thu Jan 17 16:44:25.925 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31000| Thu Jan 17 16:44:31.629 [rsMgr] replSet info electSelf 0 m31001| Thu Jan 17 16:44:31.629 [conn3] replSet RECOVERING m31001| Thu Jan 17 16:44:31.630 [conn3] replSet info voting yea for 127.0.0.2:31000 (0) m31003| Thu Jan 17 16:44:31.630 [conn2] replSet RECOVERING m31003| Thu Jan 17 16:44:31.630 [conn2] replSet info voting yea for 127.0.0.2:31000 (0) m31002| Thu Jan 17 16:44:31.629 [conn2] replSet RECOVERING m31002| Thu Jan 17 16:44:31.630 [conn2] replSet info voting yea for 127.0.0.2:31000 (0) m31001| Thu Jan 17 16:44:32.231 [rsHealthPoll] replSet member 
127.0.0.4:31002 is now in state RECOVERING m31001| Thu Jan 17 16:44:32.231 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state RECOVERING m31000| Thu Jan 17 16:44:32.629 [rsMgr] replSet PRIMARY m31002| Thu Jan 17 16:44:32.894 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state RECOVERING m31002| Thu Jan 17 16:44:32.894 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state RECOVERING m31002| Thu Jan 17 16:44:32.894 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31003| Thu Jan 17 16:44:33.504 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state RECOVERING m31003| Thu Jan 17 16:44:33.504 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31003| Thu Jan 17 16:44:33.504 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state RECOVERING m31000| Thu Jan 17 16:44:33.629 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state RECOVERING m31000| Thu Jan 17 16:44:33.629 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state RECOVERING m31000| Thu Jan 17 16:44:33.630 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state RECOVERING m31001| Thu Jan 17 16:44:34.231 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31002| Thu Jan 17 16:44:35.630 [conn2] end connection 127.0.0.2:38393 (3 connections now open) m31002| Thu Jan 17 16:44:35.630 [initandlisten] connection accepted from 127.0.0.2:60301 #5 (5 connections now open) m31002| Thu Jan 17 16:44:38.232 [conn3] end connection 127.0.0.3:54117 (3 connections now open) m31002| Thu Jan 17 16:44:38.232 [initandlisten] connection accepted from 127.0.0.3:34794 #6 (4 connections now open) m31001| Thu Jan 17 16:44:38.895 [conn4] end connection 127.0.0.4:57631 (3 connections now open) m31001| Thu Jan 17 16:44:38.895 [initandlisten] connection accepted from 127.0.0.4:53710 #6 (4 connections now open) m31001| Thu Jan 17 16:44:39.505 [conn5] end connection 127.0.0.5:55339 (3 connections now open) m31001| Thu Jan 17 16:44:39.505 [initandlisten] connection accepted from 127.0.0.5:54995 #7 (4 connections now open) m31001| Thu Jan 17 16:44:40.141 [rsSync] replSet initial sync pending m31001| Thu Jan 17 16:44:40.141 [rsSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:40.142 [initandlisten] connection accepted from 127.0.0.3:52679 #8 (5 connections now open) m31001| Thu Jan 17 16:44:40.143 [rsSync] build index local.me { _id: 1 } m31001| Thu Jan 17 16:44:40.143 [rsSync] build index done. scanned 0 total records. 0 secs m31001| Thu Jan 17 16:44:40.144 [rsSync] replSet initial sync drop all databases m31001| Thu Jan 17 16:44:40.144 [rsSync] dropAllDatabasesExceptLocal 1 m31001| Thu Jan 17 16:44:40.144 [rsSync] replSet initial sync clone all databases m31001| Thu Jan 17 16:44:40.145 [rsSync] replSet initial sync data copy, starting syncup m31001| Thu Jan 17 16:44:40.145 [rsSync] oplog sync 1 of 3 m31001| Thu Jan 17 16:44:40.145 [rsSync] oplog sync 2 of 3 m31001| Thu Jan 17 16:44:40.145 [rsSync] replSet initial sync building indexes m31001| Thu Jan 17 16:44:40.145 [rsSync] oplog sync 3 of 3 m31001| Thu Jan 17 16:44:40.145 [rsSync] replSet initial sync finishing up m31001| Thu Jan 17 16:44:40.163 [rsSync] replSet set minValid=50f870af:1 m31001| Thu Jan 17 16:44:40.163 [rsSync] build index local.replset.minvalid { _id: 1 } m31001| Thu Jan 17 16:44:40.164 [rsSync] build index done. scanned 0 total records. 
0 secs m31001| Thu Jan 17 16:44:40.164 [rsSync] replSet initial sync done m31000| Thu Jan 17 16:44:40.165 [conn8] end connection 127.0.0.3:52679 (4 connections now open) m31001| Thu Jan 17 16:44:40.444 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:40.444 [initandlisten] connection accepted from 127.0.0.3:36604 #9 (5 connections now open) m31002| Thu Jan 17 16:44:41.034 [rsSync] replSet initial sync pending m31002| Thu Jan 17 16:44:41.034 [rsSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:41.034 [initandlisten] connection accepted from 127.0.0.4:36255 #10 (6 connections now open) m31002| Thu Jan 17 16:44:41.035 [rsSync] build index local.me { _id: 1 } m31002| Thu Jan 17 16:44:41.036 [rsSync] build index done. scanned 0 total records. 0 secs m31002| Thu Jan 17 16:44:41.037 [rsSync] replSet initial sync drop all databases m31002| Thu Jan 17 16:44:41.037 [rsSync] dropAllDatabasesExceptLocal 1 m31002| Thu Jan 17 16:44:41.037 [rsSync] replSet initial sync clone all databases m31002| Thu Jan 17 16:44:41.037 [rsSync] replSet initial sync data copy, starting syncup m31002| Thu Jan 17 16:44:41.037 [rsSync] oplog sync 1 of 3 m31002| Thu Jan 17 16:44:41.037 [rsSync] oplog sync 2 of 3 m31002| Thu Jan 17 16:44:41.038 [rsSync] replSet initial sync building indexes m31002| Thu Jan 17 16:44:41.038 [rsSync] oplog sync 3 of 3 m31002| Thu Jan 17 16:44:41.038 [rsSync] replSet initial sync finishing up m31002| Thu Jan 17 16:44:41.059 [rsSync] replSet set minValid=50f870af:1 m31002| Thu Jan 17 16:44:41.060 [rsSync] build index local.replset.minvalid { _id: 1 } m31002| Thu Jan 17 16:44:41.060 [rsSync] build index done. scanned 0 total records. 0 secs m31002| Thu Jan 17 16:44:41.060 [rsSync] replSet initial sync done m31000| Thu Jan 17 16:44:41.061 [conn10] end connection 127.0.0.4:36255 (5 connections now open) m31002| Thu Jan 17 16:44:41.102 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:41.102 [initandlisten] connection accepted from 127.0.0.4:38612 #11 (6 connections now open) m31001| Thu Jan 17 16:44:41.165 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:44:41.165 [initandlisten] connection accepted from 127.0.0.3:59736 #12 (7 connections now open) m31003| Thu Jan 17 16:44:41.926 [rsSync] replSet initial sync pending m31003| Thu Jan 17 16:44:41.926 [rsSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:41.926 [initandlisten] connection accepted from 127.0.0.5:36714 #13 (8 connections now open) m31003| Thu Jan 17 16:44:41.927 [rsSync] build index local.me { _id: 1 } m31003| Thu Jan 17 16:44:41.928 [rsSync] build index done. scanned 0 total records. 
0 secs m31003| Thu Jan 17 16:44:41.928 [rsSync] replSet initial sync drop all databases m31003| Thu Jan 17 16:44:41.928 [rsSync] dropAllDatabasesExceptLocal 1 m31003| Thu Jan 17 16:44:41.928 [rsSync] replSet initial sync clone all databases m31003| Thu Jan 17 16:44:41.929 [rsSync] replSet initial sync data copy, starting syncup m31003| Thu Jan 17 16:44:41.929 [rsSync] oplog sync 1 of 3 m31003| Thu Jan 17 16:44:41.929 [rsSync] oplog sync 2 of 3 m31003| Thu Jan 17 16:44:41.929 [rsSync] replSet initial sync building indexes m31003| Thu Jan 17 16:44:41.929 [rsSync] oplog sync 3 of 3 m31003| Thu Jan 17 16:44:41.930 [rsSync] replSet initial sync finishing up m31003| Thu Jan 17 16:44:41.945 [rsSync] replSet set minValid=50f870af:1 m31003| Thu Jan 17 16:44:41.945 [rsSync] build index local.replset.minvalid { _id: 1 } m31003| Thu Jan 17 16:44:41.945 [rsSync] build index done. scanned 0 total records. 0 secs m31003| Thu Jan 17 16:44:41.946 [rsSync] replSet initial sync done m31000| Thu Jan 17 16:44:41.946 [conn13] end connection 127.0.0.5:36714 (7 connections now open) m31002| Thu Jan 17 16:44:42.061 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:44:42.061 [initandlisten] connection accepted from 127.0.0.4:39745 #14 (8 connections now open) m31003| Thu Jan 17 16:44:42.157 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:42.157 [initandlisten] connection accepted from 127.0.0.5:41399 #15 (9 connections now open) m31001| Thu Jan 17 16:44:42.165 [rsSync] replSet SECONDARY m31000| Thu Jan 17 16:44:42.167 [slaveTracking] build index local.slaves { _id: 1 } m31000| Thu Jan 17 16:44:42.167 [slaveTracking] build index done. scanned 0 total records. 0 secs m31002| Thu Jan 17 16:44:42.896 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31003| Thu Jan 17 16:44:42.946 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:44:42.947 [initandlisten] connection accepted from 127.0.0.5:58490 #16 (10 connections now open) m31002| Thu Jan 17 16:44:43.061 [rsSync] replSet SECONDARY m31003| Thu Jan 17 16:44:43.506 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31003| Thu Jan 17 16:44:43.506 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 17 16:44:43.631 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31000| Thu Jan 17 16:44:43.632 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31003| Thu Jan 17 16:44:43.947 [rsSync] replSet SECONDARY ---- Started set! ---- { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ] } Reconfiguring replica set... 
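The configuration printed above gives 127.0.0.2:31000 priority 1 and every other member priority 0, so node 0 remains the only electable member; the replSetReconfig document the shell sends for it is echoed just below. As a hedged sketch (the connection and error handling here are illustrative, not taken from the test), an equivalent reconfig issued by hand from a mongo shell would look like:

// Send the version-2 config from the log to the current primary. The primary
// closes client sockets while it relinquishes and re-acquires PRIMARY state,
// so a dropped connection here is expected ("this is normal after reconfig").
var primary = new Mongo('127.0.0.2:31000');   // current primary at this point in the log
var cfg = {
    _id: 'testReplSet',
    version: 2,
    members: [
        {_id: 0, host: '127.0.0.2:31000', priority: 1},
        {_id: 1, host: '127.0.0.3:31001', priority: 0},
        {_id: 2, host: '127.0.0.4:31002', priority: 0},
        {_id: 3, host: '127.0.0.5:31003', priority: 0}
    ]
};
try {
    printjson(primary.getDB('admin').runCommand({replSetReconfig: cfg}));
} catch (e) {
    print('reconfig dropped the connection (expected): ' + e);
}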
{ "replSetReconfig" : { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ], "version" : 2 } } m31000| Thu Jan 17 16:44:44.105 [conn1] replSet replSetReconfig config object parses ok, 4 members specified m31000| Thu Jan 17 16:44:44.106 [conn1] replSet replSetReconfig [2] m31000| Thu Jan 17 16:44:44.106 [conn1] replSet info saving a newer config version to local.system.replset m31000| Thu Jan 17 16:44:44.118 [conn1] replSet saveConfigLocally done m31000| Thu Jan 17 16:44:44.118 [conn1] replSet relinquishing primary state m31000| Thu Jan 17 16:44:44.118 [conn1] replSet SECONDARY m31000| Thu Jan 17 16:44:44.118 [conn1] replSet closing client sockets after relinquishing primary Thu Jan 17 16:44:44.119 DBClientCursor::init call() failed m31002| Thu Jan 17 16:44:44.119 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31003| Thu Jan 17 16:44:44.119 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: 127.0.0.2:31000 Caught exception error doing query: failed, this is normal after reconfig. m31001| Thu Jan 17 16:44:44.119 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31002| Thu Jan 17 16:44:44.119 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31003| Thu Jan 17 16:44:44.119 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:44.119 [conn1] replSet PRIMARY m31001| Thu Jan 17 16:44:44.119 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31000| Thu Jan 17 16:44:44.119 [conn9] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.3:36604] m31000| Thu Jan 17 16:44:44.119 [conn11] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.4:38612] Thu Jan 17 16:44:44.119 trying reconnect to 127.0.0.2:31000 m31000| Thu Jan 17 16:44:44.119 [conn1] replSet replSetReconfig new config saved locally m31000| Thu Jan 17 16:44:44.119 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31000| Thu Jan 17 16:44:44.119 [conn14] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.4:39745] m31000| Thu Jan 17 16:44:44.119 [conn16] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.5:58490] m31000| Thu Jan 17 16:44:44.119 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31000| Thu Jan 17 16:44:44.119 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31000| Thu Jan 17 16:44:44.119 [rsMgr] can't see a majority of the set, relinquishing primary m31000| Thu Jan 17 16:44:44.119 [conn15] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.5:41399] m31000| Thu Jan 17 16:44:44.119 [rsMgr] replSet relinquishing primary state m31000| Thu Jan 17 16:44:44.119 [rsMgr] replSet SECONDARY m31000| Thu Jan 17 16:44:44.119 [conn12] SocketException handling request, closing client connection: 9001 socket exception [2] server 
[127.0.0.3:59736] m31000| Thu Jan 17 16:44:44.119 [conn1] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:56353] m31000| Thu Jan 17 16:44:44.119 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31000| Thu Jan 17 16:44:44.119 [rsMgr] replSet closing client sockets after relinquishing primary Thu Jan 17 16:44:44.120 reconnect 127.0.0.2:31000 ok m31000| Thu Jan 17 16:44:44.120 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31000| Thu Jan 17 16:44:44.120 [initandlisten] connection accepted from 127.0.0.1:56373 #17 (4 connections now open) m31000| Thu Jan 17 16:44:44.120 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 17 16:44:44.120 [rsMgr] not electing self, 127.0.0.5:31003 would veto with '127.0.0.2:31000 is trying to elect itself but 127.0.0.2:31000 is already primary and more up-to-date' m31000| Thu Jan 17 16:44:44.120 [rsMgr] not electing self, 127.0.0.5:31003 would veto with '127.0.0.2:31000 is trying to elect itself but 127.0.0.2:31000 is already primary and more up-to-date' m31001| Thu Jan 17 16:44:44.233 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31001| Thu Jan 17 16:44:44.233 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31001| Thu Jan 17 16:44:44.233 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31001| Thu Jan 17 16:44:44.233 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:44:44.233 [rsMgr] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:44:44.234 [rsMgr] replSet saveConfigLocally done m31001| Thu Jan 17 16:44:44.234 [rsMgr] replSet replSetReconfig new config saved locally m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31001| Thu Jan 17 16:44:44.235 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:44:44.896 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31002| Thu Jan 17 16:44:44.896 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:44:44.896 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31002| Thu Jan 17 16:44:44.896 [rsMgr] replSet info saving a newer config version to local.system.replset m31002| Thu Jan 17 16:44:44.897 [rsMgr] replSet saveConfigLocally done m31002| Thu Jan 17 16:44:44.897 [rsMgr] replSet replSetReconfig new config saved locally m31002| Thu Jan 17 16:44:44.898 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31002| Thu Jan 17 16:44:44.898 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 2 m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 
127.0.0.3:31001 is up m31002| Thu Jan 17 16:44:44.898 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31003| Thu Jan 17 16:44:45.506 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31003| Thu Jan 17 16:44:45.506 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31003| Thu Jan 17 16:44:45.507 [rsMgr] replSet info saving a newer config version to local.system.replset m31003| Thu Jan 17 16:44:45.508 [rsMgr] replSet saveConfigLocally done m31003| Thu Jan 17 16:44:45.508 [rsMgr] replSet replSetReconfig new config saved locally m31003| Thu Jan 17 16:44:45.508 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31003| Thu Jan 17 16:44:45.508 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 2 m31003| Thu Jan 17 16:44:45.508 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31003| Thu Jan 17 16:44:45.508 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 2 m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31003| Thu Jan 17 16:44:45.508 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 17 16:44:45.631 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:44:45.631 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:44:45.632 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:44:46.120 [conn5] end connection 127.0.0.2:60301 (3 connections now open) m31002| Thu Jan 17 16:44:46.120 [initandlisten] connection accepted from 127.0.0.2:42338 #7 (4 connections now open) m31001| Thu Jan 17 16:44:46.233 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:44:46.233 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:44:46.233 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:44:46.235 [conn6] end connection 127.0.0.3:34794 (3 connections now open) m31002| Thu Jan 17 16:44:46.235 [initandlisten] connection accepted from 127.0.0.3:41230 #8 (5 connections now open) m31002| Thu Jan 17 16:44:46.896 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:44:46.896 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:44:46.897 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:44:46.898 [conn6] end connection 127.0.0.4:53710 (3 connections now open) m31001| Thu Jan 17 16:44:46.898 [initandlisten] connection accepted from 127.0.0.4:36305 #8 (4 connections now open) m31003| Thu Jan 17 16:44:47.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:44:47.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:44:47.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:44:47.509 [conn7] end connection 127.0.0.5:54995 (3 connections now open) m31001| Thu Jan 17 16:44:47.509 [initandlisten] connection accepted from 127.0.0.5:56488 #9 (4 connections now open) m31000| Thu Jan 17 16:44:50.121 [rsMgr] replSet info electSelf 0 m31001| Thu Jan 17 16:44:50.121 
[conn3] replSet info voting yea for 127.0.0.2:31000 (0)
m31002| Thu Jan 17 16:44:50.121 [conn7] replSet info voting yea for 127.0.0.2:31000 (0)
m31003| Thu Jan 17 16:44:50.121 [conn2] replSet info voting yea for 127.0.0.2:31000 (0)
m31000| Thu Jan 17 16:44:50.630 [rsMgr] replSet PRIMARY
m31002| Thu Jan 17 16:44:50.899 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY
m31003| Thu Jan 17 16:44:51.509 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY
Running: sudo bash -c tc class add dev lo parent 1:1 classid 1:11 htb rate 1bps ceil 1000Mbps > /tmp/output
Thu Jan 17 16:44:52.133 shell: started program sudo bash -c tc class add dev lo parent 1:1 classid 1:11 htb rate 1bps ceil 1000Mbps > /tmp/output
Running: sudo bash -c tc filter add dev lo parent 1: prio 1 u32 match ip src 127.0.0.2 match ip dst 127.0.0.5 flowid 1:11 > /tmp/output
Thu Jan 17 16:44:52.148 shell: started program sudo bash -c tc filter add dev lo parent 1: prio 1 u32 match ip src 127.0.0.2 match ip dst 127.0.0.5 flowid 1:11 > /tmp/output
Running: sudo bash -c tc qdisc add dev lo parent 1:11 handle 11: netem > /tmp/output
Thu Jan 17 16:44:52.162 shell: started program sudo bash -c tc qdisc add dev lo parent 1:11 handle 11: netem > /tmp/output
Running: sudo bash -c tc qdisc change dev lo parent 1:11 handle 11: netem delay 40000ms > /tmp/output
Thu Jan 17 16:44:52.176 shell: started program sudo bash -c tc qdisc change dev lo parent 1:11 handle 11: netem delay 40000ms > /tmp/output
Running: sudo bash -c tc class add dev lo parent 1:1 classid 1:12 htb rate 1bps ceil 1000Mbps > /tmp/output
Thu Jan 17 16:44:52.191 shell: started program sudo bash -c tc class add dev lo parent 1:1 classid 1:12 htb rate 1bps ceil 1000Mbps > /tmp/output
Running: sudo bash -c tc filter add dev lo parent 1: prio 1 u32 match ip src 127.0.0.5 match ip dst 127.0.0.2 flowid 1:12 > /tmp/output
Thu Jan 17 16:44:52.202 shell: started program sudo bash -c tc filter add dev lo parent 1: prio 1 u32 match ip src 127.0.0.5 match ip dst 127.0.0.2 flowid 1:12 > /tmp/output
Running: sudo bash -c tc qdisc add dev lo parent 1:12 handle 12: netem > /tmp/output
Thu Jan 17 16:44:52.215 shell: started program sudo bash -c tc qdisc add dev lo parent 1:12 handle 12: netem > /tmp/output
Running: sudo bash -c tc qdisc change dev lo parent 1:12 handle 12: netem delay 40000ms > /tmp/output
Thu Jan 17 16:44:52.228 shell: started program sudo bash -c tc qdisc change dev lo parent 1:12 handle 12: netem delay 40000ms > /tmp/output
m31001| Thu Jan 17 16:44:52.236 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY
Running: sudo bash -c tc -s -d -iec qdisc show dev lo > /tmp/output
Thu Jan 17 16:44:52.241 shell: started program sudo bash -c tc -s -d -iec qdisc show dev lo > /tmp/output
qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 2329 ver 3.17
Sent 406181 bytes 2329 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc netem 11: parent 1:11 limit 1000 delay 40.0s
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc netem 12: parent 1:12 limit 1000 delay 40.0s
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
{ "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 0 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 40 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ] }
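The tc output above confirms a 40-second netem delay on both directions of the 127.0.0.2 <-> 127.0.0.5 link (classes 1:11 and 1:12), while the configuration just printed adds slaveDelay: 0 to member 1 and slaveDelay: 40 to member 2 ahead of the version-3 reconfig that follows. A sketch of that per-pair delay, using the same hypothetical runTc wrapper as in the earlier sketch (runProgram and sudo access are assumptions, not taken from the test file):

// One HTB child class + u32 filter + netem qdisc per direction, following the
// classid/handle numbering (1:11 / 11:, 1:12 / 12:) shown in the log above.
function runTc(cmd) {
    return runProgram('sudo', 'bash', '-c', cmd + ' > /tmp/output');
}

function delayPair(src, dst, id, delayMs) {
    runTc('tc class add dev lo parent 1:1 classid 1:' + id + ' htb rate 1bps ceil 1000Mbps');
    runTc('tc filter add dev lo parent 1: prio 1 u32 match ip src ' + src + ' match ip dst ' + dst + ' flowid 1:' + id);
    runTc('tc qdisc add dev lo parent 1:' + id + ' handle ' + id + ': netem');
    runTc('tc qdisc change dev lo parent 1:' + id + ' handle ' + id + ': netem delay ' + delayMs + 'ms');
}

// 40 s of one-way delay each way between node 0 (127.0.0.2) and node 3 (127.0.0.5),
// as reported by "delay 40.0s" in the qdisc dump above:
delayPair('127.0.0.2', '127.0.0.5', 11, 40000);
delayPair('127.0.0.5', '127.0.0.2', 12, 40000);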
Reconfiguring replica set...
{
    "replSetReconfig" : {
        "_id" : "testReplSet",
        "members" : [
            { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 },
            { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 0 },
            { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 40 },
            { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 }
        ],
        "version" : 3
    }
}
m31000| Thu Jan 17 16:44:52.255 [conn17] replSet replSetReconfig config object parses ok, 4 members specified
m31002| Thu Jan 17 16:44:54.119 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000
m31003| Thu Jan 17 16:44:54.119 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000
m31001| Thu Jan 17 16:44:54.119 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000
m31000| Thu Jan 17 16:44:54.119 [initandlisten] connection accepted from 127.0.0.4:41483 #18 (5 connections now open)
m31000| Thu Jan 17 16:44:54.120 [initandlisten] connection accepted from 127.0.0.3:35580 #19 (6 connections now open)
m31001| Thu Jan 17 16:44:54.120 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000
m31002| Thu Jan 17 16:44:54.121 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000
m31000| Thu Jan 17 16:44:54.121 [initandlisten] connection accepted from 127.0.0.3:56746 #20 (7 connections now open)
m31000| Thu Jan 17 16:44:54.121 [initandlisten] connection accepted from 127.0.0.4:55227 #21 (8 connections now open)
m31003| Thu Jan 17 16:45:00.237 [conn3] end connection 127.0.0.3:34297 (3 connections now open)
m31003| Thu Jan 17 16:45:00.237 [initandlisten] connection accepted from 127.0.0.3:43395 #5 (4 connections now open)
m31003| Thu Jan 17 16:45:00.900 [conn4] end connection 127.0.0.4:54116 (3 connections now open)
m31003| Thu Jan 17 16:45:00.901 [initandlisten] connection accepted from 127.0.0.4:37206 #6 (4 connections now open)
m31002| Thu Jan 17 16:45:01.511 [conn4] end connection 127.0.0.5:59512 (3 connections now open)
m31002| Thu Jan 17 16:45:01.511 [initandlisten] connection accepted from 127.0.0.5:54992 #9 (4 connections now open)
m31001| Thu Jan 17 16:45:02.123 [conn3] end connection 127.0.0.2:56485 (3 connections now open)
m31001| Thu Jan 17 16:45:02.123 [initandlisten] connection accepted from 127.0.0.2:58169 #10 (4 connections now open)
m31000| Thu Jan 17 16:45:02.238 [conn5] end connection 127.0.0.3:43578 (7 connections now open)
m31000| Thu Jan 17 16:45:02.238 [initandlisten] connection accepted from 127.0.0.3:44065 #22 (8 connections now open)
m31000| Thu Jan 17 16:45:02.255 [conn17] DBClientCursor::init call() failed
m31000| Thu Jan 17 16:45:02.255 [conn17] replSet cmufcc requestHeartbeat 127.0.0.5:31003 : 10276 DBClientBase::findN: transport error: 127.0.0.5:31003 ns: admin.$cmd query: { replSetHeartbeat: "testReplSet", v: -1, pv: 1, checkEmpty: false, from: "" }
m31000| Thu Jan 17 16:45:02.255 [conn17] replSet replSetReconfig [2]
m31000| Thu Jan 17 16:45:02.255 [conn17] replSet info saving a newer config version to local.system.replset
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet saveConfigLocally done
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet relinquishing primary state
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet SECONDARY
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet closing client sockets after relinquishing primary
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet PRIMARY
Thu Jan 17 16:45:02.269 DBClientCursor::init call() failed
m31001| Thu Jan 17 16:45:02.269 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000
m31002| Thu Jan 17 16:45:02.269 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000
m31000| Thu Jan 17 16:45:02.269 [conn17] replSet replSetReconfig new config saved locally
m31000| Thu Jan 17 16:45:02.269 [conn19] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.3:35580]
Caught exception error doing query: failed, this is normal after reconfig.
m31000| Thu Jan 17 16:45:02.269 [conn17] command admin.$cmd command: { replSetReconfig: { _id: "testReplSet", members: [ { _id: 0.0, host: "127.0.0.2:31000", priority: 1.0 }, { _id: 1.0, host: "127.0.0.3:31001", priority: 0.0, slaveDelay: 0.0 }, { _id: 2.0, host: "127.0.0.4:31002", priority: 0.0, slaveDelay: 40.0 }, { _id: 3.0, host: "127.0.0.5:31003", priority: 0.0 } ], version: 3.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:13886 reslen:71 10014ms
m31000| Thu Jan 17 16:45:02.269 [conn18] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.4:41483]
m31000| Thu Jan 17 16:45:02.269 [conn17] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:56373]
m31000| Thu Jan 17 16:45:02.269 [rsHealthPoll] replSet member 127.0.0.3:31001 is up
m31000| Thu Jan 17 16:45:02.270 [rsHealthPoll] replSet member 127.0.0.4:31002 is up
m31000| Thu Jan 17 16:45:02.270 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY
m31000| Thu Jan 17 16:45:02.270 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY
Thu Jan 17 16:45:02.270 trying reconnect to 127.0.0.2:31000
Thu Jan 17 16:45:02.271 reconnect 127.0.0.2:31000 ok
m31000| Thu Jan 17 16:45:02.271 [initandlisten] connection accepted from 127.0.0.1:56374 #23 (6 connections now open)
m31000| Thu Jan 17 16:45:02.271 [FileAllocator] allocating new datafile /data/db/testReplSet-0/foo.ns, filling with zeroes...
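The reconfig above bumps the config to version 3 and sets slaveDelay to 0 on 127.0.0.3:31001 and 40 on 127.0.0.4:31002; the primary relinquishes and immediately reacquires PRIMARY while saving the new config, closing client sockets as it does, which is why the shell prints "this is normal after reconfig." A hedged sketch of issuing such a reconfig from the shell follows; it uses the standard rs config document stored in local.system.replset and adminCommand, but the connection handling and error message are illustrative, not the test's actual code:

```javascript
// Sketch: raise the config version and adjust slaveDelay, roughly as the log above shows.
var primary = new Mongo("127.0.0.2:31000");            // assumed direct connection for this sketch
var admin = primary.getDB("admin");

var cfg = primary.getDB("local").getCollection("system.replset").findOne();  // current config
cfg.version++;                        // replSetReconfig requires a strictly higher version
cfg.members[1].slaveDelay = 0;        // 127.0.0.3:31001
cfg.members[2].slaveDelay = 40;       // 127.0.0.4:31002, delayed by 40 seconds

try {
    admin.runCommand({replSetReconfig: cfg});
} catch (e) {
    // The primary closes client sockets when it relinquishes/reacquires PRIMARY,
    // so a dropped connection here is expected ("this is normal after reconfig").
    print("Caught exception " + e + " -- expected after reconfig");
}
```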
{ "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:02Z"), "myState" : 2, "syncingTo" : "127.0.0.2:31000", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 17, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:44:51Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:44:52Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 17, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:01Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:02Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 17, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:01Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 49, "optime" : { "t" : 1358459055000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:15Z"), "errmsg" : "syncing to: 127.0.0.2:31000", "self" : true } ], "ok" : 1 } m31000| Thu Jan 17 16:45:02.422 [FileAllocator] done allocating datafile /data/db/testReplSet-0/foo.ns, size: 16MB, took 0.15 secs m31000| Thu Jan 17 16:45:02.422 [FileAllocator] allocating new datafile /data/db/testReplSet-0/foo.0, filling with zeroes... m31000| Thu Jan 17 16:45:02.572 [FileAllocator] done allocating datafile /data/db/testReplSet-0/foo.0, size: 16MB, took 0.149 secs m31000| Thu Jan 17 16:45:02.573 [conn23] build index foo.bar { _id: 1 } m31000| Thu Jan 17 16:45:02.574 [conn23] build index done. scanned 0 total records. 
0 secs m31000| Thu Jan 17 16:45:02.574 [conn23] insert foo.bar ninserted:1 keyUpdates:0 locks(micros) w:302750 302ms { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:02Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 51, "optime" : { "t" : 1358459102000, "i" : 2 }, "optimeDate" : ISODate("2013-01-17T21:45:02Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:02Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:02Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.2:31000, it : 0, expected : 127.0.0.3:31001 m31000| Thu Jan 17 16:45:02.901 [conn6] end connection 127.0.0.4:53212 (5 connections now open) m31000| Thu Jan 17 16:45:02.901 [initandlisten] connection accepted from 127.0.0.4:49697 #24 (6 connections now open) m31002| Thu Jan 17 16:45:02.901 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31002| Thu Jan 17 16:45:02.902 [rsMgr] replSet info saving a newer config version to local.system.replset m31002| Thu Jan 17 16:45:02.902 [rsMgr] replSet saveConfigLocally done m31002| Thu Jan 17 16:45:02.903 [rsMgr] replSet replSetReconfig new config saved locally m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31002| Thu Jan 17 16:45:02.903 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31003| Thu Jan 17 16:45:03.507 [rsHealthPoll] DBClientCursor::init call() failed m31003| Thu Jan 17 16:45:03.507 [rsHealthPoll] replset info 127.0.0.2:31000 heartbeat failed, retrying m31003| Thu Jan 17 16:45:03.511 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31003| Thu Jan 17 16:45:03.512 [rsMgr] replSet info saving a newer config version to local.system.replset m31003| Thu Jan 17 16:45:03.513 [rsMgr] replSet saveConfigLocally done m31003| Thu Jan 17 16:45:03.513 [rsMgr] replSet replSetReconfig new config saved locally m31003| Thu Jan 17 16:45:03.513 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31003| Thu Jan 17 16:45:03.513 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31003| Thu 
Jan 17 16:45:03.513 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31003| Thu Jan 17 16:45:03.513 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 17 16:45:04.123 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:45:04.123 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:04.238 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31001| Thu Jan 17 16:45:04.239 [rsMgr] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:45:04.240 [rsMgr] replSet saveConfigLocally done m31001| Thu Jan 17 16:45:04.240 [rsMgr] replSet replSetReconfig new config saved locally m31001| Thu Jan 17 16:45:04.240 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31001| Thu Jan 17 16:45:04.240 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 3 3 m31001| Thu Jan 17 16:45:04.240 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31001| Thu Jan 17 16:45:04.240 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 3 3 m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31001| Thu Jan 17 16:45:04.240 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] replSet info 127.0.0.2:31000 is down (or slow to respond): m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state DOWN m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] replset info 127.0.0.2:31000 heartbeat failed, retrying m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] replSet info 127.0.0.2:31000 is down (or slow to respond): m31003| Thu Jan 17 16:45:04.507 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state DOWN m31003| Thu Jan 17 16:45:04.507 [rsMgr] replSet I don't see a primary and I can't elect myself m31002| Thu Jan 17 16:45:04.901 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:45:04.901 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:45:04.901 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:05.511 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:05.512 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:06.238 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:06.239 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:06.239 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:06.904 [conn6] end connection 127.0.0.4:37206 (3 connections now open) m31003| Thu Jan 17 16:45:06.904 [initandlisten] connection accepted from 127.0.0.4:44491 #7 (4 connections now open) m31002| Thu Jan 17 16:45:07.514 [conn9] end connection 127.0.0.5:54992 (3 connections now open) m31002| Thu Jan 17 16:45:07.514 [initandlisten] connection accepted from 127.0.0.5:37949 #10 (4 connections now open) { "set" : "testReplSet", "date" : 
ISODate("2013-01-17T21:45:07Z"), "myState" : 2, "syncingTo" : "127.0.0.2:31000", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:04Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 4, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 4, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 54, "optime" : { "t" : 1358459055000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:15Z"), "errmsg" : "syncing to: 127.0.0.2:31000", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:07Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 56, "optime" : { "t" : 1358459107000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:07Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:06Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:06Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:06Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.2:31000, it : 0, expected : 127.0.0.3:31001 m31003| Thu Jan 17 16:45:08.241 [conn5] end connection 127.0.0.3:43395 (3 connections now open) m31003| Thu Jan 17 16:45:08.241 [initandlisten] connection accepted from 127.0.0.3:46598 #8 (4 connections now open) m31001| Thu Jan 17 16:45:08.271 [conn10] end connection 127.0.0.2:58169 (3 connections now open) m31001| Thu Jan 17 16:45:08.271 [initandlisten] connection accepted from 
127.0.0.2:52181 #11 (4 connections now open) m31000| Thu Jan 17 16:45:08.904 [conn24] end connection 127.0.0.4:49697 (5 connections now open) m31000| Thu Jan 17 16:45:08.904 [initandlisten] connection accepted from 127.0.0.4:38141 #25 (6 connections now open) m31000| Thu Jan 17 16:45:10.241 [conn22] end connection 127.0.0.3:44065 (5 connections now open) m31000| Thu Jan 17 16:45:10.241 [initandlisten] connection accepted from 127.0.0.3:53118 #26 (6 connections now open) m31000| Thu Jan 17 16:45:12.255 [rsHealthPoll] replset info 127.0.0.5:31003 heartbeat failed, retrying m31001| Thu Jan 17 16:45:12.269 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31002| Thu Jan 17 16:45:12.270 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:45:12.270 [initandlisten] connection accepted from 127.0.0.3:42212 #27 (7 connections now open) m31000| Thu Jan 17 16:45:12.270 [initandlisten] connection accepted from 127.0.0.4:34777 #28 (8 connections now open) m31001| Thu Jan 17 16:45:12.271 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:45:12.271 [conn20] end connection 127.0.0.3:56746 (7 connections now open) m31000| Thu Jan 17 16:45:12.271 [initandlisten] connection accepted from 127.0.0.3:57985 #29 (8 connections now open) m31001| Thu Jan 17 16:45:12.272 [FileAllocator] allocating new datafile /data/db/testReplSet-1/foo.ns, filling with zeroes... m31001| Thu Jan 17 16:45:12.425 [FileAllocator] done allocating datafile /data/db/testReplSet-1/foo.ns, size: 16MB, took 0.152 secs m31001| Thu Jan 17 16:45:12.425 [FileAllocator] allocating new datafile /data/db/testReplSet-1/foo.0, filling with zeroes... m31001| Thu Jan 17 16:45:12.575 [FileAllocator] done allocating datafile /data/db/testReplSet-1/foo.0, size: 16MB, took 0.149 secs m31001| Thu Jan 17 16:45:12.576 [repl writer worker 1] build index foo.bar { _id: 1 } m31001| Thu Jan 17 16:45:12.577 [repl writer worker 1] build index done. scanned 0 total records. 
0 secs { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:12Z"), "myState" : 2, "syncingTo" : "127.0.0.2:31000", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:04Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 9, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:11Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:12Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 9, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:11Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 59, "optime" : { "t" : 1358459055000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:15Z"), "errmsg" : "syncing to: 127.0.0.2:31000", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:12Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 61, "optime" : { "t" : 1358459112000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:12Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459102000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:02Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:12Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:12Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.2:31000, it : 0, expected : 127.0.0.3:31001 m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] replSet info 127.0.0.5:31003 is down (or slow to respond): m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state DOWN m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] replset info 127.0.0.5:31003 heartbeat failed, retrying m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] replSet info 127.0.0.5:31003 is 
down (or slow to respond): m31000| Thu Jan 17 16:45:13.255 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state DOWN m31003| Thu Jan 17 16:45:16.507 [rsHealthPoll] replset info 127.0.0.2:31000 heartbeat failed, retrying { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:17Z"), "myState" : 2, "syncingTo" : "127.0.0.2:31000", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:16Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 14, "optime" : { "t" : 1358459112000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 14, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 64, "optime" : { "t" : 1358459055000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:15Z"), "errmsg" : "syncing to: 127.0.0.2:31000", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:17Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 66, "optime" : { "t" : 1358459117000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:17Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459112000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:16Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:16Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:16Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:13Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.2:31000, it : 0, expected : 127.0.0.3:31001 m31002| Thu Jan 17 16:45:22.273 [conn7] end connection 127.0.0.2:42338 (3 connections now open) m31002| Thu Jan 17 16:45:22.273 [initandlisten] connection accepted from 127.0.0.2:36957 #11 (4 connections now open) { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:22Z"), "myState" : 2, "syncingTo" : "127.0.0.2:31000", "members" : [ { "_id" : 0, "name" 
: "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:16Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 19, "optime" : { "t" : 1358459117000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:17Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:21Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:22Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 19, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:21Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 69, "optime" : { "t" : 1358459055000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:15Z"), "errmsg" : "syncing to: 127.0.0.2:31000", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:22Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 71, "optime" : { "t" : 1358459122000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:22Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459117000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:17Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:22Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:22Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:13Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.2:31000, it : 0, expected : 127.0.0.3:31001 m31001| Thu Jan 17 16:45:22.906 [conn8] end connection 127.0.0.4:36305 (3 connections now open) m31001| Thu Jan 17 16:45:22.906 [initandlisten] connection accepted from 127.0.0.4:40761 #12 (4 connections now open) m31001| Thu Jan 17 16:45:23.517 [conn9] end connection 127.0.0.5:56488 (3 connections now open) m31001| Thu Jan 17 16:45:23.517 [initandlisten] connection accepted from 127.0.0.5:38854 #13 (4 connections now open) m31003| Thu Jan 17 16:45:24.119 [rsBackgroundSync] repl: couldn't connect to server 127.0.0.2:31000 m31003| Thu Jan 17 16:45:24.119 [rsBackgroundSync] replSet syncing to: 127.0.0.3:31001 m31001| Thu Jan 17 16:45:24.119 [initandlisten] connection accepted from 127.0.0.5:58203 #14 (5 connections now open) m31003| 
Thu Jan 17 16:45:24.120 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.3:31001 m31001| Thu Jan 17 16:45:24.121 [initandlisten] connection accepted from 127.0.0.5:34132 #15 (6 connections now open) m31003| Thu Jan 17 16:45:24.122 [FileAllocator] allocating new datafile /data/db/testReplSet-3/foo.ns, filling with zeroes... m31002| Thu Jan 17 16:45:24.244 [conn8] end connection 127.0.0.3:41230 (3 connections now open) m31002| Thu Jan 17 16:45:24.244 [initandlisten] connection accepted from 127.0.0.3:36137 #12 (4 connections now open) m31003| Thu Jan 17 16:45:24.267 [FileAllocator] done allocating datafile /data/db/testReplSet-3/foo.ns, size: 16MB, took 0.144 secs m31003| Thu Jan 17 16:45:24.267 [FileAllocator] allocating new datafile /data/db/testReplSet-3/foo.0, filling with zeroes... m31003| Thu Jan 17 16:45:24.417 [FileAllocator] done allocating datafile /data/db/testReplSet-3/foo.0, size: 16MB, took 0.149 secs m31003| Thu Jan 17 16:45:24.418 [repl writer worker 1] build index foo.bar { _id: 1 } m31003| Thu Jan 17 16:45:24.419 [repl writer worker 1] build index done. scanned 0 total records. 0 secs m31001| Thu Jan 17 16:45:24.460 [rsGhostSync] handshake between 3 and 127.0.0.2:31000 m31000| Thu Jan 17 16:45:24.460 [initandlisten] connection accepted from 127.0.0.3:58723 #30 (9 connections now open) m31000| Thu Jan 17 16:45:25.255 [rsHealthPoll] replset info 127.0.0.5:31003 heartbeat failed, retrying m31001| Thu Jan 17 16:45:25.460 [slaveTracking] build index local.slaves { _id: 1 } m31001| Thu Jan 17 16:45:25.461 [slaveTracking] build index done. scanned 0 total records. 0 secs { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:27Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:16Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 24, "optime" : { "t" : 1358459122000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:22Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 24, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 74, "optime" : { "t" : 1358459122000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:22Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:27Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 76, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 25, "optime" : { "t" : 
1358459122000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:22Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:26Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 25, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:26Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:26Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:25Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 0, expected : 127.0.0.3:31001 { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 40 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 0 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ] } Reconfiguring replica set... { "replSetReconfig" : { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 40 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 0 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ], "version" : 4 } } m31000| Thu Jan 17 16:45:27.610 [conn23] replSet replSetReconfig config object parses ok, 4 members specified m31003| Thu Jan 17 16:45:29.507 [rsHealthPoll] replset info 127.0.0.2:31000 heartbeat failed, retrying m31003| Thu Jan 17 16:45:36.909 [conn7] end connection 127.0.0.4:44491 (3 connections now open) m31003| Thu Jan 17 16:45:36.909 [initandlisten] connection accepted from 127.0.0.4:41165 #9 (4 connections now open) m31002| Thu Jan 17 16:45:37.520 [conn10] end connection 127.0.0.5:37949 (3 connections now open) m31002| Thu Jan 17 16:45:37.520 [initandlisten] connection accepted from 127.0.0.5:40687 #13 (4 connections now open) m31000| Thu Jan 17 16:45:37.607 [conn23] replSet cmufcc requestHeartbeat 127.0.0.5:31003 : 9001 socket exception [6] server [127.0.0.5:31003] m31000| Thu Jan 17 16:45:37.607 [conn23] replSet replSetReconfig [2] m31000| Thu Jan 17 16:45:37.607 [conn23] replSet info saving a newer config version to local.system.replset m31000| Thu Jan 17 16:45:37.620 [conn23] replSet saveConfigLocally done m31000| Thu Jan 17 16:45:37.620 [conn23] replSet relinquishing primary state m31000| Thu Jan 17 16:45:37.620 [conn23] replSet SECONDARY m31000| Thu Jan 17 16:45:37.620 [conn23] replSet closing client sockets after relinquishing primary m31000| Thu Jan 17 16:45:37.620 [conn23] replSet PRIMARY m31000| Thu Jan 17 16:45:37.620 [conn23] replSet replSetReconfig new config saved locally Thu Jan 17 16:45:37.620 DBClientCursor::init call() failed m31000| Thu Jan 17 16:45:37.620 [conn28] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.4:34777] m31000| Thu Jan 17 16:45:37.620 [conn23] command admin.$cmd command: { replSetReconfig: { _id: "testReplSet", members: [ { _id: 0.0, host: "127.0.0.2:31000", priority: 1.0 }, { _id: 1.0, host: 
"127.0.0.3:31001", priority: 0.0, slaveDelay: 40.0 }, { _id: 2.0, host: "127.0.0.4:31002", priority: 0.0, slaveDelay: 0.0 }, { _id: 3.0, host: "127.0.0.5:31003", priority: 0.0 } ], version: 4.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:12489 reslen:71 10010ms m31000| Thu Jan 17 16:45:37.620 [conn27] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.3:42212] Caught exception error doing query: failed, this is normal after reconfig. m31000| Thu Jan 17 16:45:37.620 [conn23] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:56374] m31000| Thu Jan 17 16:45:37.620 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31000| Thu Jan 17 16:45:37.620 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31000| Thu Jan 17 16:45:37.620 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 17 16:45:37.620 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:45:37.620 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31002| Thu Jan 17 16:45:37.620 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 Thu Jan 17 16:45:37.620 trying reconnect to 127.0.0.2:31000 m31000| Thu Jan 17 16:45:37.621 [initandlisten] connection accepted from 127.0.0.1:56380 #31 (7 connections now open) Thu Jan 17 16:45:37.621 reconnect 127.0.0.2:31000 ok { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:37Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:29Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 34, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:37Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 34, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:37Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 84, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:37Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 86, "optime" : { "t" : 1358459137000, "i" : 2 }, "optimeDate" : ISODate("2013-01-17T21:45:37Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), 
"lastHeartbeat" : ISODate("2013-01-17T21:45:37Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:37Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31003| Thu Jan 17 16:45:38.247 [conn8] end connection 127.0.0.3:46598 (3 connections now open) m31003| Thu Jan 17 16:45:38.247 [initandlisten] connection accepted from 127.0.0.3:46106 #10 (5 connections now open) m31001| Thu Jan 17 16:45:38.247 [rsMgr] replset msgReceivedNewConfig version: version: 4 m31001| Thu Jan 17 16:45:38.247 [rsMgr] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:45:38.269 [rsMgr] replSet saveConfigLocally done m31001| Thu Jan 17 16:45:38.269 [rsMgr] replSet replSetReconfig new config saved locally m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31001| Thu Jan 17 16:45:38.269 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31000| Thu Jan 17 16:45:38.276 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:45:38.276 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:45:38.910 [conn25] end connection 127.0.0.4:38141 (6 connections now open) m31002| Thu Jan 17 16:45:38.910 [rsMgr] replset msgReceivedNewConfig version: version: 4 m31000| Thu Jan 17 16:45:38.910 [initandlisten] connection accepted from 127.0.0.4:33871 #32 (7 connections now open) m31002| Thu Jan 17 16:45:38.910 [rsMgr] replSet info saving a newer config version to local.system.replset m31002| Thu Jan 17 16:45:38.916 [rsMgr] replSet saveConfigLocally done m31002| Thu Jan 17 16:45:38.916 [rsMgr] replSet replSetReconfig new config saved locally m31002| Thu Jan 17 16:45:38.916 [rsMgr] replset msgReceivedNewConfig version: version: 4 m31002| Thu Jan 17 16:45:38.916 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 4 4 m31002| Thu Jan 17 16:45:38.916 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31002| Thu Jan 17 16:45:38.916 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31002| Thu Jan 17 16:45:38.916 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31002| Thu Jan 17 16:45:38.916 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31002| Thu Jan 17 16:45:38.916 
[rsHealthPoll] replSet member 127.0.0.2:31000 is up m31002| Thu Jan 17 16:45:38.916 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state PRIMARY m31003| Thu Jan 17 16:45:39.520 [rsMgr] replset msgReceivedNewConfig version: version: 4 m31003| Thu Jan 17 16:45:39.520 [rsMgr] replSet info saving a newer config version to local.system.replset m31003| Thu Jan 17 16:45:39.543 [rsMgr] replSet saveConfigLocally done m31003| Thu Jan 17 16:45:39.543 [rsMgr] replSet replSetReconfig new config saved locally m31003| Thu Jan 17 16:45:39.543 [rsMgr] replset msgReceivedNewConfig version: version: 4 m31003| Thu Jan 17 16:45:39.543 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 4 4 m31003| Thu Jan 17 16:45:39.543 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31003| Thu Jan 17 16:45:39.543 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31003| Thu Jan 17 16:45:39.543 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31003| Thu Jan 17 16:45:39.543 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31001| Thu Jan 17 16:45:40.247 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:40.247 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:45:40.247 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:45:40.910 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:45:40.910 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:45:40.910 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:41.520 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:41.521 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:45:42.255 [conn2] end connection 127.0.0.2:35613 (3 connections now open) m31002| Thu Jan 17 16:45:42.271 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:45:42.271 [conn21] end connection 127.0.0.4:55227 (6 connections now open) m31000| Thu Jan 17 16:45:42.271 [initandlisten] connection accepted from 127.0.0.4:52251 #33 (7 connections now open) m31002| Thu Jan 17 16:45:42.272 [FileAllocator] allocating new datafile /data/db/testReplSet-2/foo.ns, filling with zeroes... m31002| Thu Jan 17 16:45:42.421 [FileAllocator] done allocating datafile /data/db/testReplSet-2/foo.ns, size: 16MB, took 0.148 secs m31002| Thu Jan 17 16:45:42.421 [FileAllocator] allocating new datafile /data/db/testReplSet-2/foo.0, filling with zeroes... m31003| Thu Jan 17 16:45:42.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31002| Thu Jan 17 16:45:42.562 [FileAllocator] done allocating datafile /data/db/testReplSet-2/foo.0, size: 16MB, took 0.141 secs m31002| Thu Jan 17 16:45:42.564 [repl writer worker 1] build index foo.bar { _id: 1 } m31002| Thu Jan 17 16:45:42.564 [repl writer worker 1] build index done. scanned 0 total records. 
0 secs { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:42Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 3, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:41Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:42Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 3, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:41Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 89, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:42Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 91, "optime" : { "t" : 1358459142000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:42Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:41Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:42Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459084000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:44:44Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:41Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31000| Thu Jan 17 16:45:43.507 [conn7] end connection 127.0.0.5:46279 (6 connections now open) m31002| Thu Jan 17 16:45:45.621 [conn11] end connection 127.0.0.2:36957 (3 connections now open) m31002| Thu Jan 17 16:45:45.622 [initandlisten] connection accepted from 127.0.0.2:53486 #14 (4 connections now open) m31002| Thu Jan 17 16:45:46.270 [conn12] end connection 
127.0.0.3:36137 (3 connections now open) m31002| Thu Jan 17 16:45:46.271 [initandlisten] connection accepted from 127.0.0.3:33882 #15 (4 connections now open) m31001| Thu Jan 17 16:45:46.918 [conn12] end connection 127.0.0.4:40761 (5 connections now open) m31001| Thu Jan 17 16:45:46.918 [initandlisten] connection accepted from 127.0.0.4:58030 #16 (6 connections now open) m31001| Thu Jan 17 16:45:47.544 [conn13] end connection 127.0.0.5:38854 (5 connections now open) m31001| Thu Jan 17 16:45:47.545 [initandlisten] connection accepted from 127.0.0.5:56226 #17 (6 connections now open) m31000| Thu Jan 17 16:45:47.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31002| Thu Jan 17 16:45:47.620 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31001| Thu Jan 17 16:45:47.620 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:45:47.620 [initandlisten] connection accepted from 127.0.0.3:44714 #34 (7 connections now open) m31000| Thu Jan 17 16:45:47.620 [initandlisten] connection accepted from 127.0.0.4:37227 #35 (8 connections now open) m31002| Thu Jan 17 16:45:47.622 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:45:47.622 [conn33] end connection 127.0.0.4:52251 (7 connections now open) m31000| Thu Jan 17 16:45:47.622 [initandlisten] connection accepted from 127.0.0.4:43252 #36 (8 connections now open) { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:47Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 94, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:47Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 96, "optime" : { "t" : 1358459147000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:47Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), 
"lastHeartbeat" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459137000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:37Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31003| Thu Jan 17 16:45:52.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:52Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:51Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:52Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : { "t" : 1358459147000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:51Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 99, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:52Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 101, "optime" : { "t" : 1358459152000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:52Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:51Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:45:52Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459147000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:47Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:51Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), 
"pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31000| Thu Jan 17 16:45:57.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:57Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 18, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 18, "optime" : { "t" : 1358459152000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:52Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 104, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:45:57Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 106, "optime" : { "t" : 1358459157000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459152000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:52Z"), "lastHeartbeat" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31003| Thu Jan 17 16:46:00.273 
[conn10] end connection 127.0.0.3:46106 (2 connections now open) m31003| Thu Jan 17 16:46:00.273 [initandlisten] connection accepted from 127.0.0.3:43534 #11 (3 connections now open) m31003| Thu Jan 17 16:46:00.921 [conn9] end connection 127.0.0.4:41165 (2 connections now open) m31003| Thu Jan 17 16:46:00.921 [initandlisten] connection accepted from 127.0.0.4:59360 #12 (3 connections now open) m31002| Thu Jan 17 16:46:01.547 [conn13] end connection 127.0.0.5:40687 (3 connections now open) m31002| Thu Jan 17 16:46:01.547 [initandlisten] connection accepted from 127.0.0.5:33363 #16 (4 connections now open) m31001| Thu Jan 17 16:46:01.624 [conn11] end connection 127.0.0.2:52181 (5 connections now open) m31001| Thu Jan 17 16:46:01.624 [initandlisten] connection accepted from 127.0.0.2:53290 #18 (6 connections now open) m31000| Thu Jan 17 16:46:02.274 [conn26] end connection 127.0.0.3:53118 (7 connections now open) m31000| Thu Jan 17 16:46:02.274 [initandlisten] connection accepted from 127.0.0.3:59764 #37 (8 connections now open) m31003| Thu Jan 17 16:46:02.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:02Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:02Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 23, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:01Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:46:02Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 23, "optime" : { "t" : 1358459157000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:01Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 109, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:02Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 111, "optime" : { "t" : 1358459162000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:02Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 25, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:01Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:46:02Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 25, "optime" : { "t" : 1358459157000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:57Z"), "lastHeartbeat" : 
ISODate("2013-01-17T21:46:01Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31000| Thu Jan 17 16:46:02.921 [conn32] end connection 127.0.0.4:33871 (7 connections now open) m31000| Thu Jan 17 16:46:02.921 [initandlisten] connection accepted from 127.0.0.4:41259 #38 (8 connections now open) m31000| Thu Jan 17 16:46:07.607 [rsHealthPoll] Client::shutdown not called: rsHealthPoll { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:07Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:02Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 28, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 28, "optime" : { "t" : 1358459162000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:02Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 114, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:07Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 116, "optime" : { "t" : 1358459167000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:07Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 30, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 30, "optime" : { "t" : 1358459162000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:02Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 
0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 1, expected : 127.0.0.4:31002 m31003| Thu Jan 17 16:46:08.220 [rsBackgroundSync] replSet syncing to: 127.0.0.4:31002 m31001| Thu Jan 17 16:46:08.220 [conn14] end connection 127.0.0.5:58203 (5 connections now open) m31002| Thu Jan 17 16:46:08.220 [initandlisten] connection accepted from 127.0.0.5:49429 #17 (5 connections now open) m31003| Thu Jan 17 16:46:08.221 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.4:31002 m31001| Thu Jan 17 16:46:08.222 [conn15] end connection 127.0.0.5:34132 (4 connections now open) m31002| Thu Jan 17 16:46:08.222 [initandlisten] connection accepted from 127.0.0.5:40647 #18 (6 connections now open) m31000| Thu Jan 17 16:46:08.223 [initandlisten] connection accepted from 127.0.0.4:52619 #39 (9 connections now open) m31002| Thu Jan 17 16:46:08.223 [rsGhostSync] handshake between 3 and 127.0.0.2:31000 m31002| Thu Jan 17 16:46:09.224 [slaveTracking] build index local.slaves { _id: 1 } m31002| Thu Jan 17 16:46:09.224 [slaveTracking] build index done. scanned 0 total records. 0 secs m31003| Thu Jan 17 16:46:12.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:12Z"), "myState" : 2, "syncingTo" : "127.0.0.4:31002", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:02Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 33, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:11Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:46:12Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 33, "optime" : { "t" : 1358459167000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:11Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 119, "optime" : { "t" : 1358459167000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:07Z"), "errmsg" : "syncing to: 127.0.0.4:31002", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:12Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 121, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 35, "optime" : { "t" : 1358459127000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:45:27Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:11Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:46:12Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 
127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 35, "optime" : { "t" : 1358459167000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:11Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.4:31002, it : 1, expected : 127.0.0.4:31002 { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 0 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 40 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ] } Reconfiguring replica set... { "replSetReconfig" : { "_id" : "testReplSet", "members" : [ { "_id" : 0, "host" : "127.0.0.2:31000", "priority" : 1 }, { "_id" : 1, "host" : "127.0.0.3:31001", "priority" : 0, "slaveDelay" : 0 }, { "_id" : 2, "host" : "127.0.0.4:31002", "priority" : 0, "slaveDelay" : 40 }, { "_id" : 3, "host" : "127.0.0.5:31003", "priority" : 0 } ], "version" : 5 } } m31000| Thu Jan 17 16:46:12.676 [conn31] replSet replSetReconfig config object parses ok, 4 members specified m31002| Thu Jan 17 16:46:15.627 [conn14] end connection 127.0.0.2:53486 (5 connections now open) m31002| Thu Jan 17 16:46:15.627 [initandlisten] connection accepted from 127.0.0.2:36058 #19 (6 connections now open) m31002| Thu Jan 17 16:46:16.276 [conn15] end connection 127.0.0.3:33882 (5 connections now open) m31002| Thu Jan 17 16:46:16.276 [initandlisten] connection accepted from 127.0.0.3:46555 #20 (6 connections now open) m31001| Thu Jan 17 16:46:16.923 [conn16] end connection 127.0.0.4:58030 (3 connections now open) m31001| Thu Jan 17 16:46:16.924 [initandlisten] connection accepted from 127.0.0.4:59315 #19 (4 connections now open) m31001| Thu Jan 17 16:46:17.550 [conn17] end connection 127.0.0.5:56226 (3 connections now open) m31001| Thu Jan 17 16:46:17.550 [initandlisten] connection accepted from 127.0.0.5:41732 #20 (4 connections now open) m31000| Thu Jan 17 16:46:17.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31001| Thu Jan 17 16:46:17.622 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:46:17.622 [conn29] end connection 127.0.0.3:57985 (8 connections now open) m31000| Thu Jan 17 16:46:17.622 [initandlisten] connection accepted from 127.0.0.3:47332 #40 (9 connections now open) m31003| Thu Jan 17 16:46:19.551 [rsMgr] replSet I don't see a primary and I can't elect myself m31003| Thu Jan 17 16:46:22.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31000| Thu Jan 17 16:46:27.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31003| Thu Jan 17 16:46:30.279 [conn11] end connection 127.0.0.3:43534 (2 connections now open) m31003| Thu Jan 17 16:46:30.279 [initandlisten] connection accepted from 127.0.0.3:40859 #13 (3 connections now open) m31003| Thu Jan 17 16:46:30.926 [conn12] end connection 127.0.0.4:59360 (2 
connections now open) m31003| Thu Jan 17 16:46:30.926 [initandlisten] connection accepted from 127.0.0.4:47019 #14 (3 connections now open) m31002| Thu Jan 17 16:46:31.553 [conn16] end connection 127.0.0.5:33363 (5 connections now open) m31002| Thu Jan 17 16:46:31.553 [initandlisten] connection accepted from 127.0.0.5:50568 #21 (6 connections now open) m31001| Thu Jan 17 16:46:31.630 [conn18] end connection 127.0.0.2:53290 (3 connections now open) m31001| Thu Jan 17 16:46:31.630 [initandlisten] connection accepted from 127.0.0.2:47925 #21 (4 connections now open) m31000| Thu Jan 17 16:46:32.280 [conn37] end connection 127.0.0.3:59764 (8 connections now open) m31000| Thu Jan 17 16:46:32.280 [initandlisten] connection accepted from 127.0.0.3:49871 #41 (9 connections now open) m31003| Thu Jan 17 16:46:32.507 [rsHealthPoll] replSet info 127.0.0.2:31000 is down (or slow to respond): m31003| Thu Jan 17 16:46:32.507 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state DOWN m31000| Thu Jan 17 16:46:32.926 [conn38] end connection 127.0.0.4:41259 (8 connections now open) m31000| Thu Jan 17 16:46:32.926 [initandlisten] connection accepted from 127.0.0.4:34478 #42 (9 connections now open) m31000| Thu Jan 17 16:46:37.607 [rsHealthPoll] replSet info 127.0.0.5:31003 is down (or slow to respond): m31000| Thu Jan 17 16:46:37.607 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state DOWN m31003| Thu Jan 17 16:46:44.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31002| Thu Jan 17 16:46:45.633 [conn19] end connection 127.0.0.2:36058 (5 connections now open) m31002| Thu Jan 17 16:46:45.633 [initandlisten] connection accepted from 127.0.0.2:49891 #22 (6 connections now open) m31002| Thu Jan 17 16:46:46.282 [conn20] end connection 127.0.0.3:46555 (5 connections now open) m31002| Thu Jan 17 16:46:46.282 [initandlisten] connection accepted from 127.0.0.3:33810 #23 (6 connections now open) m31001| Thu Jan 17 16:46:46.929 [conn19] end connection 127.0.0.4:59315 (3 connections now open) m31001| Thu Jan 17 16:46:46.929 [initandlisten] connection accepted from 127.0.0.4:37468 #22 (4 connections now open) m31001| Thu Jan 17 16:46:47.556 [conn20] end connection 127.0.0.5:41732 (3 connections now open) m31001| Thu Jan 17 16:46:47.556 [initandlisten] connection accepted from 127.0.0.5:48459 #23 (4 connections now open) m31000| Thu Jan 17 16:46:47.607 [conn31] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31003| Thu Jan 17 16:46:54.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31000| Thu Jan 17 16:46:57.607 [conn31] replSet cmufcc requestHeartbeat 127.0.0.5:31003 : 9001 socket exception [6] server [127.0.0.5:31003] m31000| Thu Jan 17 16:46:57.607 [conn31] replSet replSetReconfig [2] m31000| Thu Jan 17 16:46:57.607 [conn31] replSet info saving a newer config version to local.system.replset m31000| Thu Jan 17 16:46:57.641 [conn31] replSet saveConfigLocally done m31000| Thu Jan 17 16:46:57.641 [conn31] replSet relinquishing primary state m31000| Thu Jan 17 16:46:57.641 [conn31] replSet SECONDARY m31000| Thu Jan 17 16:46:57.641 [conn31] replSet closing client sockets after relinquishing primary Thu Jan 17 16:46:57.641 DBClientCursor::init call() failed m31001| Thu Jan 17 16:46:57.642 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31002| Thu Jan 17 16:46:57.642 [rsBackgroundSync] replSet db exception 
in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31001| Thu Jan 17 16:46:57.642 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000 m31000| Thu Jan 17 16:46:57.641 [conn31] replSet PRIMARY m31000| Thu Jan 17 16:46:57.641 [conn31] replSet replSetReconfig new config saved locally m31000| Thu Jan 17 16:46:57.641 [conn35] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.4:37227] m31000| Thu Jan 17 16:46:57.641 [conn34] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.3:44714] m31000| Thu Jan 17 16:46:57.641 [conn40] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.3:47332] m31000| Thu Jan 17 16:46:57.642 [conn31] command admin.$cmd command: { replSetReconfig: { _id: "testReplSet", members: [ { _id: 0.0, host: "127.0.0.2:31000", priority: 1.0 }, { _id: 1.0, host: "127.0.0.3:31001", priority: 0.0, slaveDelay: 0.0 }, { _id: 2.0, host: "127.0.0.4:31002", priority: 0.0, slaveDelay: 40.0 }, { _id: 3.0, host: "127.0.0.5:31003", priority: 0.0 } ], version: 5.0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:34096 reslen:71 44965ms m31000| Thu Jan 17 16:46:57.642 [conn31] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:56380] m31000| Thu Jan 17 16:46:57.642 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31000| Thu Jan 17 16:46:57.642 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31000| Thu Jan 17 16:46:57.642 [rsHealthPoll] replSet member 127.0.0.3:31001 is up Caught exception error doing query: failed, this is normal after reconfig. 
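The reconfig logged above bumps the config to version 5 and puts a 40-second slaveDelay on member 2 (127.0.0.4:31002). As a rough illustration only (not the test's actual code; the variable names are placeholders), such a reconfig is typically issued from the shell like this, and the try/catch mirrors the "this is normal after reconfig" message, since the primary closes client sockets while installing the new config:

    // Illustrative sketch of the reconfig step; the real test code is in
    // jstests/replsets/sync_change_source.js and may build the config differently.
    var primary = db;                                        // assumed: a connection to the current primary
    var cfg = primary.getSiblingDB("local").system.replset.findOne();
    cfg.version++;                                           // a reconfig must carry a higher version
    cfg.members[2].slaveDelay = 40;                          // 40-second delay on member 2, as printed above
    try {
        primary.adminCommand({ replSetReconfig: cfg });
    } catch (e) {
        // The primary drops client sockets while applying the new config,
        // so a query/connection error here is expected.
        print("expected exception during reconfig: " + e);
    }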
m31000| Thu Jan 17 16:46:57.642 [rsMgr] can't see a majority of the set, relinquishing primary m31000| Thu Jan 17 16:46:57.642 [rsMgr] replSet relinquishing primary state m31000| Thu Jan 17 16:46:57.642 [rsMgr] replSet SECONDARY m31000| Thu Jan 17 16:46:57.642 [rsMgr] replSet closing client sockets after relinquishing primary m31000| Thu Jan 17 16:46:57.642 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY Thu Jan 17 16:46:57.642 trying reconnect to 127.0.0.2:31000 Thu Jan 17 16:46:57.642 reconnect 127.0.0.2:31000 ok m31000| Thu Jan 17 16:46:57.642 [initandlisten] connection accepted from 127.0.0.1:56391 #43 (6 connections now open) { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:57Z"), "myState" : 2, "syncingTo" : "127.0.0.4:31002", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:32Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 78, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 78, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 164, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "errmsg" : "syncing to: 127.0.0.4:31002", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:46:57Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 166, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" 
: ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.4:31002, it : 2, expected : 127.0.0.3:31001 m31001| Thu Jan 17 16:46:58.285 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31001| Thu Jan 17 16:46:58.285 [rsMgr] replset msgReceivedNewConfig version: version: 5 m31001| Thu Jan 17 16:46:58.285 [rsMgr] replSet info saving a newer config version to local.system.replset m31001| Thu Jan 17 16:46:58.296 [rsMgr] replSet saveConfigLocally done m31001| Thu Jan 17 16:46:58.296 [rsMgr] replSet replSetReconfig new config saved locally m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31001| Thu Jan 17 16:46:58.296 [rsMgr] replSet I don't see a primary and I can't elect myself m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31001| Thu Jan 17 16:46:58.296 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:46:58.931 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31002| Thu Jan 17 16:46:58.931 [rsMgr] replset msgReceivedNewConfig version: version: 5 m31002| Thu Jan 17 16:46:58.931 [rsMgr] replSet info saving a newer config version to local.system.replset m31002| Thu Jan 17 16:46:58.932 [rsMgr] replSet saveConfigLocally done m31002| Thu Jan 17 16:46:58.933 [rsMgr] replSet replSetReconfig new config saved locally m31002| Thu Jan 17 16:46:58.933 [rsMgr] replset msgReceivedNewConfig version: version: 5 m31002| Thu Jan 17 16:46:58.933 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 5 5 m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.5:31003 is up m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state SECONDARY m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.2:31000 is up m31002| Thu Jan 17 16:46:58.933 [rsMgr] replSet I don't see a primary and I can't elect myself m31002| Thu Jan 17 16:46:58.933 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state SECONDARY m31003| Thu Jan 17 16:46:59.558 [rsMgr] replset msgReceivedNewConfig version: version: 5 m31003| Thu Jan 17 16:46:59.558 [rsMgr] replSet info saving a newer config version to local.system.replset m31003| Thu Jan 17 16:46:59.559 [rsMgr] replSet saveConfigLocally done m31003| Thu Jan 17 16:46:59.559 [rsMgr] replSet replSetReconfig new config saved locally m31003| Thu Jan 17 16:46:59.560 [rsMgr] replset msgReceivedNewConfig version: version: 5 m31003| Thu Jan 17 16:46:59.560 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 5 5 m31003| Thu Jan 17 16:46:59.560 [rsHealthPoll] replSet member 127.0.0.4:31002 is up m31003| Thu Jan 17 16:46:59.560 [rsHealthPoll] replSet member 127.0.0.3:31001 is up m31003| Thu Jan 17 16:46:59.560 [rsHealthPoll] replSet member 127.0.0.4:31002 is now in state SECONDARY m31003| Thu Jan 17 16:46:59.560 [rsHealthPoll] replSet member 127.0.0.3:31001 is now in state SECONDARY m31000| Thu Jan 
17 16:46:59.635 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:46:59.635 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:47:00.285 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:47:00.285 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31001| Thu Jan 17 16:47:00.285 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:47:00.931 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:47:00.931 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31002| Thu Jan 17 16:47:00.931 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:47:01.558 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31003| Thu Jan 17 16:47:01.559 [rsHealthPoll] Client::shutdown not called: rsHealthPoll { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:02Z"), "myState" : 2, "syncingTo" : "127.0.0.4:31002", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 3, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:01Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:02Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 3, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:01Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 169, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "errmsg" : "syncing to: 127.0.0.4:31002", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:02Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 171, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:02Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:00Z"), "pingMs" : 0, 
"lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.4:31002, it : 2, expected : 127.0.0.3:31001 m31003| Thu Jan 17 16:47:04.507 [rsHealthPoll] Client::shutdown not called: rsHealthPoll m31000| Thu Jan 17 16:47:07.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31002| Thu Jan 17 16:47:07.642 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31001| Thu Jan 17 16:47:07.642 [rsBackgroundSync] replSet syncing to: 127.0.0.2:31000 m31000| Thu Jan 17 16:47:07.642 [initandlisten] connection accepted from 127.0.0.4:48902 #44 (7 connections now open) m31000| Thu Jan 17 16:47:07.642 [initandlisten] connection accepted from 127.0.0.3:46926 #45 (8 connections now open) m31001| Thu Jan 17 16:47:07.643 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:47:07.643 [initandlisten] connection accepted from 127.0.0.3:36901 #46 (9 connections now open) { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:07Z"), "myState" : 2, "syncingTo" : "127.0.0.4:31002", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:04Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:07Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 174, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "errmsg" : "syncing to: 127.0.0.4:31002", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:07Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 176, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : 
ISODate("2013-01-17T21:47:06Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:06Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.4:31002, it : 2, expected : 127.0.0.3:31001 m31003| Thu Jan 17 16:47:08.298 [conn13] end connection 127.0.0.3:40859 (2 connections now open) m31003| Thu Jan 17 16:47:08.298 [initandlisten] connection accepted from 127.0.0.3:54408 #15 (3 connections now open) m31003| Thu Jan 17 16:47:08.935 [conn14] end connection 127.0.0.4:47019 (2 connections now open) m31003| Thu Jan 17 16:47:08.935 [initandlisten] connection accepted from 127.0.0.4:47898 #16 (3 connections now open) m31002| Thu Jan 17 16:47:09.562 [conn21] end connection 127.0.0.5:50568 (5 connections now open) m31002| Thu Jan 17 16:47:09.562 [initandlisten] connection accepted from 127.0.0.5:34680 #24 (6 connections now open) m31001| Thu Jan 17 16:47:09.644 [conn21] end connection 127.0.0.2:47925 (3 connections now open) m31001| Thu Jan 17 16:47:09.644 [initandlisten] connection accepted from 127.0.0.2:45840 #24 (4 connections now open) m31000| Thu Jan 17 16:47:10.298 [conn41] end connection 127.0.0.3:49871 (8 connections now open) m31000| Thu Jan 17 16:47:10.298 [initandlisten] connection accepted from 127.0.0.3:59845 #47 (10 connections now open) m31000| Thu Jan 17 16:47:10.935 [conn42] end connection 127.0.0.4:34478 (8 connections now open) m31000| Thu Jan 17 16:47:10.935 [initandlisten] connection accepted from 127.0.0.4:42960 #48 (9 connections now open) { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:12Z"), "myState" : 2, "syncingTo" : "127.0.0.4:31002", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:04Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:11Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:12Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 13, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:11Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 
1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 179, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "errmsg" : "syncing to: 127.0.0.4:31002", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:12Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 181, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:12Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 15, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:10Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.4:31002, it : 2, expected : 127.0.0.3:31001 m31003| Thu Jan 17 16:47:13.369 [rsBackgroundSync] replSet syncing to: 127.0.0.3:31001 m31002| Thu Jan 17 16:47:13.369 [conn17] end connection 127.0.0.5:49429 (5 connections now open) m31001| Thu Jan 17 16:47:13.369 [initandlisten] connection accepted from 127.0.0.5:40268 #25 (5 connections now open) m31003| Thu Jan 17 16:47:13.370 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.3:31001 m31002| Thu Jan 17 16:47:13.370 [conn18] end connection 127.0.0.5:40647 (4 connections now open) m31001| Thu Jan 17 16:47:13.371 [initandlisten] connection accepted from 127.0.0.5:56344 #26 (6 connections now open) m31000| Thu Jan 17 16:47:13.371 [conn30] end connection 127.0.0.3:58723 (8 connections now open) m31001| Thu Jan 17 16:47:13.371 [rsGhostSync] Socket recv() errno:104 Connection reset by peer 127.0.0.2:31000 m31001| Thu Jan 17 16:47:13.371 [rsGhostSync] SocketException: remote: 127.0.0.2:31000 error: 9001 socket exception [1] server [127.0.0.2:31000] m31001| Thu Jan 17 16:47:13.372 [rsGhostSync] Socket flush send() errno:32 Broken pipe 127.0.0.2:31000 m31001| Thu Jan 17 16:47:13.372 [rsGhostSync] caught exception (socket exception [SEND_ERROR] for 127.0.0.2:31000) in destructor (~PiggyBackData) m31003| Thu Jan 17 16:47:14.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31000| Thu Jan 17 16:47:17.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:17Z"), "myState" : 2, "syncingTo" : "127.0.0.3:31001", "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : { "t" : 0, 
"i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:04Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 18, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 18, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:47:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 184, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "errmsg" : "syncing to: 127.0.0.3:31001", "self" : true } ], "ok" : 1 } { "set" : "testReplSet", "date" : ISODate("2013-01-17T21:47:17Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "127.0.0.2:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 186, "optime" : { "t" : 1358459217000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:57Z"), "self" : true }, { "_id" : 1, "name" : "127.0.0.3:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:16Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 2, "name" : "127.0.0.4:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 20, "optime" : { "t" : 1358459172000, "i" : 1 }, "optimeDate" : ISODate("2013-01-17T21:46:12Z"), "lastHeartbeat" : ISODate("2013-01-17T21:46:57Z"), "lastHeartbeatRecv" : ISODate("2013-01-17T21:47:16Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000" }, { "_id" : 3, "name" : "127.0.0.5:31003", "health" : -1, "state" : 6, "stateStr" : "UNKNOWN", "uptime" : 0, "optime" : { "t" : 0, "i" : 0 }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } Syncing to: 127.0.0.3:31001, it : 2, expected : 127.0.0.3:31001 m31002| Thu Jan 17 16:47:23.646 [conn22] end connection 127.0.0.2:49891 (3 connections now open) m31002| Thu Jan 17 16:47:23.646 [initandlisten] connection accepted from 127.0.0.2:51664 #25 (4 connections now open) m31002| Thu Jan 17 16:47:24.301 [conn23] end connection 127.0.0.3:33810 (3 connections now open) m31002| Thu Jan 17 16:47:24.301 [initandlisten] connection accepted from 127.0.0.3:52086 #26 (4 connections now open) m31003| Thu Jan 17 16:47:24.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31001| Thu Jan 17 16:47:24.937 [conn22] end connection 127.0.0.4:37468 (5 connections now open) m31001| Thu Jan 17 16:47:24.938 [initandlisten] connection accepted from 127.0.0.4:47183 #27 (6 
connections now open) m31001| Thu Jan 17 16:47:25.565 [conn23] end connection 127.0.0.5:48459 (5 connections now open) m31001| Thu Jan 17 16:47:25.565 [initandlisten] connection accepted from 127.0.0.5:33503 #28 (6 connections now open) m31000| Thu Jan 17 16:47:27.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31003| Thu Jan 17 16:47:33.566 [rsMgr] replSet I don't see a primary and I can't elect myself m31003| Thu Jan 17 16:47:34.507 [rsHealthPoll] replSet info 127.0.0.2:31000 is down (or slow to respond): m31003| Thu Jan 17 16:47:34.507 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state DOWN m31000| Thu Jan 17 16:47:37.607 [rsHealthPoll] replSet info 127.0.0.5:31003 is down (or slow to respond): m31000| Thu Jan 17 16:47:37.607 [rsHealthPoll] replSet member 127.0.0.5:31003 is now in state DOWN m31002| Thu Jan 17 16:47:37.643 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.2:31000 m31000| Thu Jan 17 16:47:37.643 [conn36] end connection 127.0.0.4:43252 (7 connections now open) m31000| Thu Jan 17 16:47:37.643 [initandlisten] connection accepted from 127.0.0.4:36724 #49 (8 connections now open) m31003| Thu Jan 17 16:47:38.303 [conn15] end connection 127.0.0.3:54408 (2 connections now open) m31003| Thu Jan 17 16:47:38.304 [initandlisten] connection accepted from 127.0.0.3:37046 #17 (3 connections now open) m31003| Thu Jan 17 16:47:38.940 [conn16] end connection 127.0.0.4:47898 (2 connections now open) m31003| Thu Jan 17 16:47:38.940 [initandlisten] connection accepted from 127.0.0.4:41570 #18 (3 connections now open) m31002| Thu Jan 17 16:47:39.567 [conn24] end connection 127.0.0.5:34680 (3 connections now open) m31002| Thu Jan 17 16:47:39.568 [initandlisten] connection accepted from 127.0.0.5:39104 #27 (5 connections now open) m31001| Thu Jan 17 16:47:39.650 [conn24] end connection 127.0.0.2:45840 (5 connections now open) m31001| Thu Jan 17 16:47:39.650 [initandlisten] connection accepted from 127.0.0.2:45344 #29 (6 connections now open) m31000| Thu Jan 17 16:47:40.304 [conn47] end connection 127.0.0.3:59845 (7 connections now open) m31000| Thu Jan 17 16:47:40.304 [initandlisten] connection accepted from 127.0.0.3:37767 #50 (8 connections now open) m31000| Thu Jan 17 16:47:40.941 [conn48] end connection 127.0.0.4:42960 (7 connections now open) m31000| Thu Jan 17 16:47:40.941 [initandlisten] connection accepted from 127.0.0.4:55183 #51 (8 connections now open) m31003| Thu Jan 17 16:47:46.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000 m31000| Thu Jan 17 16:47:47.607 [rsHealthPoll] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003 m31002| Thu Jan 17 16:47:53.652 [conn25] end connection 127.0.0.2:51664 (3 connections now open) m31002| Thu Jan 17 16:47:53.652 [initandlisten] connection accepted from 127.0.0.2:49246 #28 (4 connections now open) m31002| Thu Jan 17 16:47:54.307 [conn26] end connection 127.0.0.3:52086 (3 connections now open) m31002| Thu Jan 17 16:47:54.307 [initandlisten] connection accepted from 127.0.0.3:44029 #29 (4 connections now open) m31001| Thu Jan 17 16:47:54.943 [conn27] end connection 127.0.0.4:47183 (5 connections now open) m31001| Thu Jan 17 16:47:54.943 [initandlisten] connection accepted from 127.0.0.4:47142 #30 (6 connections now open) m31001| Thu Jan 17 16:47:55.570 [conn28] end connection 127.0.0.5:33503 (5 connections now open) m31001| Thu Jan 17 16:47:55.570 [initandlisten] connection accepted from 
127.0.0.5:36309 #31 (6 connections now open)
m31003| Thu Jan 17 16:47:56.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000
m31000| Thu Jan 17 16:47:57.607 [rsHealthPoll] Client::shutdown not called: rsHealthPoll
m31000| Thu Jan 17 16:48:07.607 [MultiCommandJob] couldn't connect to 127.0.0.5:31003: couldn't connect to server 127.0.0.5:31003
m31003| Thu Jan 17 16:48:08.309 [conn17] end connection 127.0.0.3:37046 (2 connections now open)
m31003| Thu Jan 17 16:48:08.309 [initandlisten] connection accepted from 127.0.0.3:39383 #19 (3 connections now open)
m31003| Thu Jan 17 16:48:08.946 [conn18] end connection 127.0.0.4:41570 (2 connections now open)
m31003| Thu Jan 17 16:48:08.946 [initandlisten] connection accepted from 127.0.0.4:37647 #20 (3 connections now open)
m31002| Thu Jan 17 16:48:09.573 [conn27] end connection 127.0.0.5:39104 (3 connections now open)
m31002| Thu Jan 17 16:48:09.573 [initandlisten] connection accepted from 127.0.0.5:48296 #30 (4 connections now open)
m31001| Thu Jan 17 16:48:09.655 [conn29] end connection 127.0.0.2:45344 (5 connections now open)
m31001| Thu Jan 17 16:48:09.655 [initandlisten] connection accepted from 127.0.0.2:60758 #32 (6 connections now open)
m31000| Thu Jan 17 16:48:10.310 [conn50] end connection 127.0.0.3:37767 (7 connections now open)
m31000| Thu Jan 17 16:48:10.310 [initandlisten] connection accepted from 127.0.0.3:59197 #52 (8 connections now open)
m31000| Thu Jan 17 16:48:10.946 [conn51] end connection 127.0.0.4:55183 (7 connections now open)
m31000| Thu Jan 17 16:48:10.946 [initandlisten] connection accepted from 127.0.0.4:57819 #53 (8 connections now open)
m31001| Thu Jan 17 16:48:16.311 [rsMgr] replSet I don't see a primary and I can't elect myself
m31000| Thu Jan 17 16:48:17.607 [rsMgr] not electing self, 127.0.0.4:31002 would veto with '127.0.0.2:31000 is trying to elect itself but 127.0.0.2:31000 is already primary and more up-to-date'
m31003| Thu Jan 17 16:48:18.507 [rsHealthPoll] couldn't connect to 127.0.0.2:31000: couldn't connect to server 127.0.0.2:31000
Thu Jan 17 16:48:19.736 javascript execution failed src/mongo/shell/utils.js:1261 [Finding master] timed out after 60000ms ( 31 tries )
pts['desc'] + ']' + " timed out after " + timeout + "ms ( " + tries + " tries
^
failed to load: /home/gregorv/Workspaces/10Gen/mongo/jstests/replsets/sync_change_source.js
m31000| Thu Jan 17 16:48:19.736 got signal 15 (Terminated), will terminate after current cmd ends
m31000| Thu Jan 17 16:48:19.736 [interruptThread] now exiting
m31000| Thu Jan 17 16:48:19.737 dbexit:
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: going to close listening sockets...
m31000| Thu Jan 17 16:48:19.737 [interruptThread] closing listening socket: 13
m31000| Thu Jan 17 16:48:19.737 [interruptThread] closing listening socket: 14
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: going to flush diaglog...
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: going to close sockets...
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: waiting for fs preallocator...
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: lock for final commit...
m31000| Thu Jan 17 16:48:19.737 [interruptThread] shutdown: final commit...
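The "javascript execution failed ... [Finding master] timed out after 60000ms ( 31 tries )" error above comes from a shell helper that polls for a primary and gives up after a minute; after the reconfig demoted 127.0.0.2:31000, the remaining members kept logging "I don't see a primary and I can't elect myself" or vetoing the old primary, so no member returned to PRIMARY in time. A rough equivalent of that polling loop (the helper name and the 2-second retry interval are assumptions, not taken from utils.js):

    // Sketch only: approximates the wait-for-primary loop that timed out above.
    function waitForPrimary(conn, timeoutMs) {
        var start = new Date();
        var tries = 0;
        while (new Date() - start < timeoutMs) {
            tries++;
            var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
            if (status.ok) {
                for (var i = 0; i < status.members.length; i++) {
                    if (status.members[i].state === 1) {     // state 1 == PRIMARY
                        return status.members[i].name;
                    }
                }
            }
            sleep(2000);                                     // mongo shell built-in, milliseconds
        }
        throw "[Finding master] timed out after " + timeoutMs + "ms ( " + tries + " tries )";
    }
    // usage (illustrative): waitForPrimary(new Mongo("127.0.0.2:31000"), 60000);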
m31002| Thu Jan 17 16:48:19.737 [conn28] end connection 127.0.0.2:49246 (3 connections now open)
m31001| Thu Jan 17 16:48:19.737 [conn32] end connection 127.0.0.2:60758 (5 connections now open)
m31001| Thu Jan 17 16:48:19.737 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000
m31002| Thu Jan 17 16:48:19.737 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.2:31000
m31000| Thu Jan 17 16:48:19.774 [interruptThread] shutdown: closing all files...
m31000| Thu Jan 17 16:48:19.775 [interruptThread] closeAllFiles() finished
m31000| Thu Jan 17 16:48:19.775 [interruptThread] journalCleanup...
m31000| Thu Jan 17 16:48:19.775 [interruptThread] removeJournalFiles
m31000| Thu Jan 17 16:48:19.790 [interruptThread] shutdown: removing fs lock...
m31000| Thu Jan 17 16:48:19.791 dbexit: really exiting now
m31001| Thu Jan 17 16:48:20.312 [rsHealthPoll] DBClientCursor::init call() failed
m31001| Thu Jan 17 16:48:20.312 [rsHealthPoll] replset info 127.0.0.2:31000 heartbeat failed, retrying
m31001| Thu Jan 17 16:48:20.312 [rsHealthPoll] replSet info 127.0.0.2:31000 is down (or slow to respond):
m31001| Thu Jan 17 16:48:20.312 [rsHealthPoll] replSet member 127.0.0.2:31000 is now in state DOWN
m31001| Thu Jan 17 16:48:20.737 got signal 15 (Terminated), will terminate after current cmd ends
m31001| Thu Jan 17 16:48:20.737 [interruptThread] now exiting
m31001| Thu Jan 17 16:48:20.737 dbexit:
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: going to close listening sockets...
m31001| Thu Jan 17 16:48:20.737 [interruptThread] closing listening socket: 16
m31001| Thu Jan 17 16:48:20.737 [interruptThread] closing listening socket: 17
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: going to flush diaglog...
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: going to close sockets...
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: waiting for fs preallocator...
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: lock for final commit...
m31001| Thu Jan 17 16:48:20.737 [interruptThread] shutdown: final commit...
m31003| Thu Jan 17 16:48:20.737 [conn19] end connection 127.0.0.3:39383 (2 connections now open)
m31002| Thu Jan 17 16:48:20.737 [conn29] end connection 127.0.0.3:44029 (2 connections now open)
m31003| Thu Jan 17 16:48:20.737 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.3:31001
m31001| Thu Jan 17 16:48:20.738 [interruptThread] shutdown: closing all files...
m31001| Thu Jan 17 16:48:20.739 [interruptThread] closeAllFiles() finished
m31001| Thu Jan 17 16:48:20.739 [interruptThread] journalCleanup...
m31001| Thu Jan 17 16:48:20.739 [interruptThread] removeJournalFiles
m31001| Thu Jan 17 16:48:20.757 [interruptThread] shutdown: removing fs lock...
m31001| Thu Jan 17 16:48:20.757 dbexit: really exiting now
m31002| Thu Jan 17 16:48:20.948 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Thu Jan 17 16:48:20.948 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Thu Jan 17 16:48:20.948 [rsH
Thu Jan 17 16:48:23.740 [conn2] end connection 127.0.0.1:56639 (0 connections now open)
4.7916 minutes
Thu Jan 17 16:48:23.740 got signal 15 (Terminated), will terminate after current cmd ends
Thu Jan 17 16:48:23.740 [interruptThread] now exiting
Thu Jan 17 16:48:23.740 dbexit:
Thu Jan 17 16:48:23.740 [interruptThread] shutdown: going to close listening sockets...
Thu Jan 17 16:48:23.741 [interruptThread] closing listening socket: 9
Thu Jan 17 16:48:23.741 [interruptThread] closing listening socket: 10
Thu Jan 17 16:48:23.741 [interruptThread] closing listening socket: 11
Thu Jan 17 16:48:23.741 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Thu Jan 17 16:48:23.741 [interruptThread] shutdown: going to flush diaglog...
Thu Jan 17 16:48:23.741 [interruptThread] shutdown: going to close sockets...
Thu Jan 17 16:48:23.741 [interruptThread] shutdown: waiting for fs preallocator...
Thu Jan 17 16:48:23.741 [interruptThread] shutdown: lock for final commit...
Thu Jan 17 16:48:23.741 [interruptThread] shutdown: final commit...
Thu Jan 17 16:48:23.742 [interruptThread] shutdown: closing all files...
Thu Jan 17 16:48:23.742 [interruptThread] closeAllFiles() finished
Thu Jan 17 16:48:23.742 [interruptThread] journalCleanup...
Thu Jan 17 16:48:23.743 [interruptThread] removeJournalFiles
Thu Jan 17 16:48:23.756 [interruptThread] shutdown: removing fs lock...
Thu Jan 17 16:48:23.756 dbexit: really exiting now
test /home/gregorv/Workspaces/10Gen/mongo/jstests/replsets/sync_change_source.js exited with status 253
0 tests succeeded
The following tests failed (with exit code):
/home/gregorv/Workspaces/10Gen/mongo/jstests/replsets/sync_change_source.js 253
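For reference, the repeated "Syncing to: <host>, it : <n>, expected : <host>" lines in this run compare a member's reported sync source with the source the test expects; replSetGetStatus reports it in the top-level syncingTo field visible in the status documents above. A minimal sketch of reading that field, with the host and expected value taken from iteration 2 of this run (illustrative, not the test's actual code):

    // 127.0.0.5:31003 is the member whose status documents carry "self" : true above.
    var conn = new Mongo("127.0.0.5:31003");
    var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
    var expected = "127.0.0.3:31001";                        // the expected sync source in iteration 2
    print("Syncing to: " + status.syncingTo + ", expected : " + expected);
    // The test re-checks this every few seconds until the values match or a timeout elapses.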