specter:QA-424 vkarpov$ ./mongo-26 repl_upgrade_24_secondaries.js
MongoDB shell version: 2.5.5-pre-
connecting to: test
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1", "dbpath" : "$set-$node", "binVersion" : "248", "restart" : undefined, "pathOpts" : { "node" : 0, "set" : "rs1" } }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-0'
2013-12-18T15:16:12.843-0500 shell: started program /Users/vkarpov/qa/QA/QA-424/mongod-248 --oplogSize 40 --keyFile jstests/libs/key1 --port 31000 --noprealloc --smallfiles --rest --replSet rs1 --dbpath /data/db/rs1-0
2013-12-18T15:16:12.844-0500 warning: Failed to connect to 127.0.0.1:31000, reason: errno:61 Connection refused
m31000| note: noprealloc may hurt performance in many applications
m31000| Wed Dec 18 15:16:12.874 [initandlisten] MongoDB starting : pid=3462 port=31000 dbpath=/data/db/rs1-0 64-bit host=specter.local
m31000| Wed Dec 18 15:16:12.874 [initandlisten]
m31000| Wed Dec 18 15:16:12.874 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
m31000| Wed Dec 18 15:16:12.874 [initandlisten] db version v2.4.8
m31000| Wed Dec 18 15:16:12.874 [initandlisten] git version: a350fc38922fbda2cec8d5dd842237b904eafc14
m31000| Wed Dec 18 15:16:12.874 [initandlisten] build info: Darwin bs-osx-106-x86-64-2.10gen.cc 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:32:41 PDT 2011; root:xnu-1504.15.3~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
m31000| Wed Dec 18 15:16:12.874 [initandlisten] allocator: system
m31000| Wed Dec 18 15:16:12.874 [initandlisten] options: { dbpath: "/data/db/rs1-0", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31000, replSet: "rs1", rest: true, smallfiles: true }
m31000| Wed Dec 18 15:16:12.881 [initandlisten] journal dir=/data/db/rs1-0/journal
m31000| Wed Dec 18 15:16:12.881 [initandlisten] recover : no journal files present, no recovery needed
m31000| Wed Dec 18 15:16:12.893 [FileAllocator] allocating new datafile /data/db/rs1-0/local.ns, filling with zeroes...
m31000| Wed Dec 18 15:16:12.893 [FileAllocator] creating directory /data/db/rs1-0/_tmp
m31000| Wed Dec 18 15:16:12.920 [FileAllocator] done allocating datafile /data/db/rs1-0/local.ns, size: 16MB, took 0.026 secs
m31000| Wed Dec 18 15:16:12.946 [FileAllocator] allocating new datafile /data/db/rs1-0/local.0, filling with zeroes...
m31000| Wed Dec 18 15:16:12.969 [FileAllocator] done allocating datafile /data/db/rs1-0/local.0, size: 16MB, took 0.022 secs
m31000| Wed Dec 18 15:16:12.997 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 104ms
m31000| Wed Dec 18 15:16:12.998 [websvr] admin web console waiting for connections on port 32000
m31000| Wed Dec 18 15:16:12.998 [initandlisten] waiting for connections on port 31000
m31000| Wed Dec 18 15:16:12.999 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31000| Wed Dec 18 15:16:12.999 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31000| Wed Dec 18 15:16:13.046 [initandlisten] connection accepted from 127.0.0.1:61247 #1 (1 connection now open)
[ connection to specter.local:31000 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31001, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1", "dbpath" : "$set-$node", "binVersion" : "248", "restart" : undefined, "pathOpts" : { "node" : 1, "set" : "rs1" } }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-1'
2013-12-18T15:16:13.062-0500 shell: started program /Users/vkarpov/qa/QA/QA-424/mongod-248 --oplogSize 40 --keyFile jstests/libs/key1 --port 31001 --noprealloc --smallfiles --rest --replSet rs1 --dbpath /data/db/rs1-1
2013-12-18T15:16:13.062-0500 warning: Failed to connect to 127.0.0.1:31001, reason: errno:61 Connection refused
m31001| note: noprealloc may hurt performance in many applications
m31001| Wed Dec 18 15:16:13.096 [initandlisten] MongoDB starting : pid=3463 port=31001 dbpath=/data/db/rs1-1 64-bit host=specter.local
m31001| Wed Dec 18 15:16:13.097 [initandlisten]
m31001| Wed Dec 18 15:16:13.097 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
m31001| Wed Dec 18 15:16:13.097 [initandlisten] db version v2.4.8
m31001| Wed Dec 18 15:16:13.097 [initandlisten] git version: a350fc38922fbda2cec8d5dd842237b904eafc14
m31001| Wed Dec 18 15:16:13.097 [initandlisten] build info: Darwin bs-osx-106-x86-64-2.10gen.cc 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:32:41 PDT 2011; root:xnu-1504.15.3~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
m31001| Wed Dec 18 15:16:13.097 [initandlisten] allocator: system
m31001| Wed Dec 18 15:16:13.097 [initandlisten] options: { dbpath: "/data/db/rs1-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "rs1", rest: true, smallfiles: true }
m31001| Wed Dec 18 15:16:13.097 [initandlisten] journal dir=/data/db/rs1-1/journal
m31001| Wed Dec 18 15:16:13.098 [initandlisten] recover : no journal files present, no recovery needed
m31001| Wed Dec 18 15:16:13.114 [FileAllocator] allocating new datafile /data/db/rs1-1/local.ns, filling with zeroes...
m31001| Wed Dec 18 15:16:13.114 [FileAllocator] creating directory /data/db/rs1-1/_tmp
m31001| Wed Dec 18 15:16:13.142 [FileAllocator] done allocating datafile /data/db/rs1-1/local.ns, size: 16MB, took 0.027 secs
m31001| Wed Dec 18 15:16:13.168 [FileAllocator] allocating new datafile /data/db/rs1-1/local.0, filling with zeroes...
m31001| Wed Dec 18 15:16:13.192 [FileAllocator] done allocating datafile /data/db/rs1-1/local.0, size: 16MB, took 0.023 secs
m31001| Wed Dec 18 15:16:13.218 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 104ms
m31001| Wed Dec 18 15:16:13.219 [websvr] admin web console waiting for connections on port 32001
m31001| Wed Dec 18 15:16:13.219 [initandlisten] waiting for connections on port 31001
m31001| Wed Dec 18 15:16:13.220 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31001| Wed Dec 18 15:16:13.220 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31001| Wed Dec 18 15:16:13.264 [initandlisten] connection accepted from 127.0.0.1:61249 #1 (1 connection now open)
[ connection to specter.local:31000, connection to specter.local:31001 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31002, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1", "dbpath" : "$set-$node", "binVersion" : "248", "restart" : undefined, "pathOpts" : { "node" : 2, "set" : "rs1" } }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-2'
2013-12-18T15:16:13.280-0500 shell: started program /Users/vkarpov/qa/QA/QA-424/mongod-248 --oplogSize 40 --keyFile jstests/libs/key1 --port 31002 --noprealloc --smallfiles --rest --replSet rs1 --dbpath /data/db/rs1-2
2013-12-18T15:16:13.281-0500 warning: Failed to connect to 127.0.0.1:31002, reason: errno:61 Connection refused
m31002| note: noprealloc may hurt performance in many applications
m31002| Wed Dec 18 15:16:13.314 [initandlisten] MongoDB starting : pid=3464 port=31002 dbpath=/data/db/rs1-2 64-bit host=specter.local
m31002| Wed Dec 18 15:16:13.314 [initandlisten]
m31002| Wed Dec 18 15:16:13.314 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
m31002| Wed Dec 18 15:16:13.314 [initandlisten] db version v2.4.8
m31002| Wed Dec 18 15:16:13.314 [initandlisten] git version: a350fc38922fbda2cec8d5dd842237b904eafc14
m31002| Wed Dec 18 15:16:13.314 [initandlisten] build info: Darwin bs-osx-106-x86-64-2.10gen.cc 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:32:41 PDT 2011; root:xnu-1504.15.3~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
m31002| Wed Dec 18 15:16:13.314 [initandlisten] allocator: system
m31002| Wed Dec 18 15:16:13.314 [initandlisten] options: { dbpath: "/data/db/rs1-2", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31002, replSet: "rs1", rest: true, smallfiles: true }
m31002| Wed Dec 18 15:16:13.315 [initandlisten] journal dir=/data/db/rs1-2/journal
m31002| Wed Dec 18 15:16:13.315 [initandlisten] recover : no journal files present, no recovery needed
m31002| Wed Dec 18 15:16:13.330 [FileAllocator] allocating new datafile /data/db/rs1-2/local.ns, filling with zeroes...
m31002| Wed Dec 18 15:16:13.330 [FileAllocator] creating directory /data/db/rs1-2/_tmp
m31002| Wed Dec 18 15:16:13.357 [FileAllocator] done allocating datafile /data/db/rs1-2/local.ns, size: 16MB, took 0.025 secs
m31002| Wed Dec 18 15:16:13.383 [FileAllocator] allocating new datafile /data/db/rs1-2/local.0, filling with zeroes...
m31002| Wed Dec 18 15:16:13.403 [FileAllocator] done allocating datafile /data/db/rs1-2/local.0, size: 16MB, took 0.02 secs
m31002| Wed Dec 18 15:16:13.433 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 102ms
m31002| Wed Dec 18 15:16:13.434 [websvr] admin web console waiting for connections on port 32002
m31002| Wed Dec 18 15:16:13.434 [initandlisten] waiting for connections on port 31002
m31002| Wed Dec 18 15:16:13.435 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31002| Wed Dec 18 15:16:13.435 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31002| Wed Dec 18 15:16:13.482 [initandlisten] connection accepted from 127.0.0.1:61251 #1 (1 connection now open)
[ connection to specter.local:31000, connection to specter.local:31001, connection to specter.local:31002 ]
{ "replSetInitiate" : { "_id" : "rs1", "members" : [ { "_id" : 0, "host" : "specter.local:31000" }, { "_id" : 1, "host" : "specter.local:31001" }, { "_id" : 2, "host" : "specter.local:31002" } ] } }
m31000| Wed Dec 18 15:16:13.485 [conn1] note: no users configured in admin.system.users, allowing localhost access
m31000| Wed Dec 18 15:16:13.485 [conn1] replSet replSetInitiate admin command received from client
m31000| Wed Dec 18 15:16:13.485 [conn1] replSet replSetInitiate config object parses ok, 3 members specified
m31001| Wed Dec 18 15:16:13.628 [initandlisten] connection accepted from 10.4.101.171:61252 #2 (2 connections now open)
m31001| Wed Dec 18 15:16:13.629 [conn2] authenticate db: local { authenticate: 1, nonce: "3eaddee1c02b90d0", user: "__system", key: "58bdaca0881bda0335316a8f6b1e85d6" }
m31002| Wed Dec 18 15:16:13.630 [initandlisten] connection accepted from 10.4.101.171:61253 #2 (2 connections now open)
m31002| Wed Dec 18 15:16:13.630 [conn2] authenticate db: local { authenticate: 1, nonce: "f530994937e130db", user: "__system", key: "7885eefa16781d94cdaf35ba18b641d2" }
m31000| Wed Dec 18 15:16:13.631 [conn1] replSet replSetInitiate all members seem up
m31000| Wed Dec 18 15:16:13.631 [conn1] ******
m31000| Wed Dec 18 15:16:13.631 [conn1] creating replication oplog of size: 40MB...
m31000| Wed Dec 18 15:16:13.631 [FileAllocator] allocating new datafile /data/db/rs1-0/local.1, filling with zeroes...
m31000| Wed Dec 18 15:16:13.792 [FileAllocator] done allocating datafile /data/db/rs1-0/local.1, size: 64MB, took 0.161 secs
m31000| Wed Dec 18 15:16:13.835 [conn1] ******
m31000| Wed Dec 18 15:16:13.835 [conn1] replSet info saving a newer config version to local.system.replset
m31000| Wed Dec 18 15:16:13.854 [conn1] replSet saveConfigLocally done
m31000| Wed Dec 18 15:16:13.854 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31000| Wed Dec 18 15:16:13.854 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "rs1", members: [ { _id: 0.0, host: "specter.local:31000" }, { _id: 1.0, host: "specter.local:31001" }, { _id: 2.0, host: "specter.local:31002" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:223229 reslen:112 368ms
{ "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
m31001| Wed Dec 18 15:16:13.857 [conn1] note: no users configured in admin.system.users, allowing localhost access
m31002| Wed Dec 18 15:16:13.858 [conn1] note: no users configured in admin.system.users, allowing localhost access
m31000| Wed Dec 18 15:16:23.000 [rsStart] replSet I am specter.local:31000
m31000| Wed Dec 18 15:16:23.000 [rsStart] replSet STARTUP2
m31001| Wed Dec 18 15:16:23.221 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31002| Wed Dec 18 15:16:23.436 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31000| Wed Dec 18 15:16:24.002 [rsSync] replSet SECONDARY
m31000| Wed Dec 18 15:16:25.002 [rsHealthPoll] replSet member specter.local:31002 is up
m31000| Wed Dec 18 15:16:25.002 [rsHealthPoll] replSet member specter.local:31001 is up
m31000| Wed Dec 18 15:16:25.002 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31000| Wed Dec 18 15:16:25.002 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31000| Wed Dec 18 15:16:31.005 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31001| Wed Dec 18 15:16:33.222 [rsStart] trying to contact specter.local:31000
m31000| Wed Dec 18 15:16:33.223 [initandlisten] connection accepted from 10.4.101.171:61255 #2 (2 connections now open)
m31000| Wed Dec 18 15:16:33.223 [conn2] authenticate db: local { authenticate: 1, nonce: "35305bb2cf64e592", user: "__system", key: "40b5096a1011450ab5bce265e27eee91" }
m31001| Wed Dec 18 15:16:33.224 [rsStart] replSet I am specter.local:31001
m31001| Wed Dec 18 15:16:33.224 [rsStart] replSet got config version 1 from a remote, saving locally
m31001| Wed Dec 18 15:16:33.224 [rsStart] replSet info saving a newer config version to local.system.replset
m31001| Wed Dec 18 15:16:33.234 [rsStart] replSet saveConfigLocally done
m31001| Wed Dec 18 15:16:33.234 [rsStart] replSet STARTUP2
m31001| Wed Dec 18 15:16:33.235 [rsSync] ******
m31001| Wed Dec 18 15:16:33.235 [rsSync] creating replication oplog of size: 40MB...
m31001| Wed Dec 18 15:16:33.235 [FileAllocator] allocating new datafile /data/db/rs1-1/local.1, filling with zeroes...
m31001| Wed Dec 18 15:16:33.394 [FileAllocator] done allocating datafile /data/db/rs1-1/local.1, size: 64MB, took 0.158 secs
m31001| Wed Dec 18 15:16:33.434 [rsSync] ******
m31001| Wed Dec 18 15:16:33.434 [rsSync] replSet initial sync pending
m31001| Wed Dec 18 15:16:33.434 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31002| Wed Dec 18 15:16:33.437 [rsStart] trying to contact specter.local:31000
m31000| Wed Dec 18 15:16:33.437 [initandlisten] connection accepted from 10.4.101.171:61256 #3 (3 connections now open)
m31000| Wed Dec 18 15:16:33.438 [conn3] authenticate db: local { authenticate: 1, nonce: "ed93587dc2b59d68", user: "__system", key: "d9d0660c3e73f51cd42dd07bb22c7cb0" }
m31002| Wed Dec 18 15:16:33.438 [rsStart] replSet I am specter.local:31002
m31002| Wed Dec 18 15:16:33.439 [rsStart] replSet got config version 1 from a remote, saving locally
m31002| Wed Dec 18 15:16:33.439 [rsStart] replSet info saving a newer config version to local.system.replset
m31002| Wed Dec 18 15:16:33.443 [rsStart] replSet saveConfigLocally done
m31002| Wed Dec 18 15:16:33.443 [rsStart] replSet STARTUP2
m31002| Wed Dec 18 15:16:33.444 [rsSync] ******
m31002| Wed Dec 18 15:16:33.444 [rsSync] creating replication oplog of size: 40MB...
m31002| Wed Dec 18 15:16:33.444 [FileAllocator] allocating new datafile /data/db/rs1-2/local.1, filling with zeroes...
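The replSetInitiate command echoed in the log above carries a three-member config document. As a hedged sketch (plain JavaScript, not captured from the test itself), this is the shape of the document you would hand to rs.initiate() in the mongo shell; the host names come from this log run:

```javascript
// Sketch of the replica set config seen in the replSetInitiate command above.
// In a live mongo shell you would run rs.initiate(config); here we only build
// and sanity-check the document.
const config = {
  _id: "rs1", // must match the --replSet name the mongods were started with
  members: [
    { _id: 0, host: "specter.local:31000" },
    { _id: 1, host: "specter.local:31001" },
    { _id: 2, host: "specter.local:31002" }
  ]
};

// Every member needs a unique _id and a resolvable host:port pair.
const ids = config.members.map(m => m._id);
if (new Set(ids).size !== ids.length) {
  throw new Error("duplicate member _id in replica set config");
}
console.log(config.members.map(m => m.host).join(", "));
```

The command is sent to one member only (31000 here); that member contacts the others, confirms they are up ("all members seem up"), and persists the config, which the remaining nodes then fetch ("got config version 1 from a remote").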
m31002| Wed Dec 18 15:16:33.605 [FileAllocator] done allocating datafile /data/db/rs1-2/local.1, size: 64MB, took 0.16 secs
m31002| Wed Dec 18 15:16:33.645 [rsSync] ******
m31002| Wed Dec 18 15:16:33.645 [rsSync] replSet initial sync pending
m31002| Wed Dec 18 15:16:33.645 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31000| Wed Dec 18 15:16:35.007 [rsHealthPoll] replset info specter.local:31002 thinks that we are down
m31000| Wed Dec 18 15:16:35.007 [rsHealthPoll] replset info specter.local:31001 thinks that we are down
m31000| Wed Dec 18 15:16:35.007 [rsHealthPoll] replSet member specter.local:31002 is now in state STARTUP2
m31000| Wed Dec 18 15:16:35.008 [rsHealthPoll] replSet member specter.local:31001 is now in state STARTUP2
m31000| Wed Dec 18 15:16:35.008 [rsMgr] not electing self, specter.local:31002 would veto with 'I don't think specter.local:31000 is electable'
m31000| Wed Dec 18 15:16:35.008 [rsMgr] not electing self, specter.local:31002 would veto with 'I don't think specter.local:31000 is electable'
m31001| Wed Dec 18 15:16:35.225 [rsHealthPoll] replSet member specter.local:31000 is up
m31001| Wed Dec 18 15:16:35.225 [rsHealthPoll] replSet member specter.local:31000 is now in state SECONDARY
m31002| Wed Dec 18 15:16:35.225 [initandlisten] connection accepted from 10.4.101.171:61257 #3 (3 connections now open)
m31002| Wed Dec 18 15:16:35.225 [conn3] authenticate db: local { authenticate: 1, nonce: "894fb675297551cf", user: "__system", key: "fced883b51964894b617edee71cdc614" }
m31001| Wed Dec 18 15:16:35.226 [rsHealthPoll] replset info specter.local:31002 thinks that we are down
m31001| Wed Dec 18 15:16:35.226 [rsHealthPoll] replSet member specter.local:31002 is up
m31001| Wed Dec 18 15:16:35.226 [rsHealthPoll] replSet member specter.local:31002 is now in state STARTUP2
m31002| Wed Dec 18 15:16:35.440 [rsHealthPoll] replSet member specter.local:31000 is up
m31002| Wed Dec 18 15:16:35.440 [rsHealthPoll] replSet member specter.local:31000 is now in state SECONDARY
m31001| Wed Dec 18 15:16:35.440 [initandlisten] connection accepted from 10.4.101.171:61258 #3 (3 connections now open)
m31001| Wed Dec 18 15:16:35.440 [conn3] authenticate db: local { authenticate: 1, nonce: "c43dc85ee1929c87", user: "__system", key: "b8cdf0445f2ba096858be79711e0e609" }
m31002| Wed Dec 18 15:16:35.441 [rsHealthPoll] replSet member specter.local:31001 is up
m31002| Wed Dec 18 15:16:35.441 [rsHealthPoll] replSet member specter.local:31001 is now in state STARTUP2
m31001| Wed Dec 18 15:16:39.009 [conn2] end connection 10.4.101.171:61252 (2 connections now open)
m31001| Wed Dec 18 15:16:39.009 [initandlisten] connection accepted from 10.4.101.171:61259 #4 (3 connections now open)
m31001| Wed Dec 18 15:16:39.009 [conn4] authenticate db: local { authenticate: 1, nonce: "3bdf07a52ce57cc1", user: "__system", key: "a175a85d76d4230d11f8080e44a13983" }
m31000| Wed Dec 18 15:16:41.010 [rsMgr] replSet info electSelf 0
m31002| Wed Dec 18 15:16:41.010 [conn2] replSet RECOVERING
m31002| Wed Dec 18 15:16:41.010 [conn2] replSet info voting yea for specter.local:31000 (0)
m31001| Wed Dec 18 15:16:41.010 [conn4] replSet RECOVERING
m31001| Wed Dec 18 15:16:41.010 [conn4] replSet info voting yea for specter.local:31000 (0)
m31000| Wed Dec 18 15:16:41.015 [rsMgr] replSet PRIMARY
m31001| Wed Dec 18 15:16:41.229 [rsHealthPoll] replSet member specter.local:31002 is now in state RECOVERING
m31001| Wed Dec 18 15:16:41.229 [rsHealthPoll] replSet member specter.local:31000 is now in state PRIMARY
m31002| Wed Dec 18 15:16:41.443 [rsHealthPoll] replSet member specter.local:31001 is now in state RECOVERING
m31002| Wed Dec 18 15:16:41.443 [rsHealthPoll] replSet member specter.local:31000 is now in state PRIMARY
m31000| Wed Dec 18 15:16:43.011 [rsHealthPoll] replSet member specter.local:31002 is now in state RECOVERING
m31000| Wed Dec 18 15:16:43.011 [rsHealthPoll] replSet member specter.local:31001 is now in state RECOVERING
m31000| Wed Dec 18 15:16:49.234 [conn2] end connection 10.4.101.171:61255 (2 connections now open)
m31000| Wed Dec 18 15:16:49.234 [initandlisten] connection accepted from 10.4.101.171:61302 #4 (3 connections now open)
m31000| Wed Dec 18 15:16:49.234 [conn4] authenticate db: local { authenticate: 1, nonce: "e4cea3277bbdd18b", user: "__system", key: "119c701857e92e0b9036efe644df454a" }
m31001| Wed Dec 18 15:16:49.435 [rsSync] replSet initial sync pending
m31001| Wed Dec 18 15:16:49.435 [rsSync] replSet syncing to: specter.local:31000
m31000| Wed Dec 18 15:16:49.436 [initandlisten] connection accepted from 10.4.101.171:61303 #5 (4 connections now open)
m31000| Wed Dec 18 15:16:49.436 [conn5] authenticate db: local { authenticate: 1, nonce: "d3848d147289f8da", user: "__system", key: "78b912a12ccf1a51f2202c8907b5c463" }
m31001| Wed Dec 18 15:16:49.437 [rsSync] build index local.me { _id: 1 }
m31001| Wed Dec 18 15:16:49.438 [rsSync] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:49.439 [rsSync] build index local.replset.minvalid { _id: 1 }
m31001| Wed Dec 18 15:16:49.440 [rsSync] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:49.440 [rsSync] replSet initial sync drop all databases
m31001| Wed Dec 18 15:16:49.440 [rsSync] dropAllDatabasesExceptLocal 1
m31001| Wed Dec 18 15:16:49.440 [rsSync] replSet initial sync clone all databases
m31001| Wed Dec 18 15:16:49.440 [rsSync] replSet initial sync cloning db: admin
m31000| Wed Dec 18 15:16:49.441 [initandlisten] connection accepted from 10.4.101.171:61304 #6 (5 connections now open)
m31000| Wed Dec 18 15:16:49.441 [conn6] authenticate db: local { authenticate: 1, nonce: "d38b2dfba1afba3f", user: "__system", key: "09f96b05241a4cfecdca96888bd40b62" }
m31001| Wed Dec 18 15:16:49.441 [rsSync] replSet initial sync data copy, starting syncup
m31001| Wed Dec 18 15:16:49.441 [rsSync] oplog sync 1 of 3
m31001| Wed Dec 18 15:16:49.448 [rsSync] oplog sync 2 of 3
m31001| Wed Dec 18 15:16:49.448 [rsSync] replSet initial sync building indexes
m31001| Wed Dec 18 15:16:49.448 [rsSync] replSet initial sync cloning indexes for : admin
m31000| Wed Dec 18 15:16:49.448 [conn3] end connection 10.4.101.171:61256 (4 connections now open)
m31000| Wed Dec 18 15:16:49.449 [initandlisten] connection accepted from 10.4.101.171:61305 #7 (5 connections now open)
m31001| Wed Dec 18 15:16:49.449 [rsSync] oplog sync 3 of 3
m31000| Wed Dec 18 15:16:49.449 [conn6] end connection 10.4.101.171:61304 (4 connections now open)
m31000| Wed Dec 18 15:16:49.449 [conn7] authenticate db: local { authenticate: 1, nonce: "396be5bb82a29a66", user: "__system", key: "4ebc37dda223b399bec2f0bd4de0c831" }
m31001| Wed Dec 18 15:16:49.449 [rsSync] replSet initial sync finishing up
m31001| Wed Dec 18 15:16:49.459 [rsSync] replSet set minValid=52b2028d:1
m31001| Wed Dec 18 15:16:49.467 [rsSync] replSet initial sync done
m31000| Wed Dec 18 15:16:49.467 [conn5] end connection 10.4.101.171:61303 (3 connections now open)
m31002| Wed Dec 18 15:16:49.646 [rsSync] replSet initial sync pending
m31002| Wed Dec 18 15:16:49.646 [rsSync] replSet syncing to: specter.local:31000
m31000| Wed Dec 18 15:16:49.648 [initandlisten] connection accepted from 10.4.101.171:61306 #8 (4 connections now open)
m31000| Wed Dec 18 15:16:49.648 [conn8] authenticate db: local { authenticate: 1, nonce: "c9bc25c50f2b7245", user: "__system", key: "ce2f9befcfba361fbfe28751a758894a" }
m31002| Wed Dec 18 15:16:49.650 [rsSync] build index local.me { _id: 1 }
m31002| Wed Dec 18 15:16:49.652 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31002| Wed Dec 18 15:16:49.654 [rsSync] build index local.replset.minvalid { _id: 1 }
m31002| Wed Dec 18 15:16:49.655 [rsSync] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:16:49.655 [rsSync] replSet initial sync drop all databases
m31002| Wed Dec 18 15:16:49.655 [rsSync] dropAllDatabasesExceptLocal 1
m31002| Wed Dec 18 15:16:49.655 [rsSync] replSet initial sync clone all databases
m31002| Wed Dec 18 15:16:49.655 [rsSync] replSet initial sync cloning db: admin
m31000| Wed Dec 18 15:16:49.656 [initandlisten] connection accepted from 10.4.101.171:61307 #9 (5 connections now open)
m31000| Wed Dec 18 15:16:49.656 [conn9] authenticate db: local { authenticate: 1, nonce: "b6a05de7e7a5e927", user: "__system", key: "28804cfa04f175830041a1135b290e9e" }
m31002| Wed Dec 18 15:16:49.657 [rsSync] replSet initial sync data copy, starting syncup
m31002| Wed Dec 18 15:16:49.657 [rsSync] oplog sync 1 of 3
m31002| Wed Dec 18 15:16:49.657 [rsSync] oplog sync 2 of 3
m31002| Wed Dec 18 15:16:49.657 [rsSync] replSet initial sync building indexes
m31002| Wed Dec 18 15:16:49.657 [rsSync] replSet initial sync cloning indexes for : admin
m31002| Wed Dec 18 15:16:49.657 [rsSync] oplog sync 3 of 3
m31000| Wed Dec 18 15:16:49.657 [conn9] end connection 10.4.101.171:61307 (4 connections now open)
m31002| Wed Dec 18 15:16:49.657 [rsSync] replSet initial sync finishing up
m31002| Wed Dec 18 15:16:49.666 [rsSync] replSet set minValid=52b2028d:1
m31002| Wed Dec 18 15:16:49.685 [rsSync] replSet initial sync done
m31000| Wed Dec 18 15:16:49.686 [conn8] end connection 10.4.101.171:61306 (3 connections now open)
m31001| Wed Dec 18 15:16:50.247 [rsBackgroundSync] replSet syncing to: specter.local:31000
m31000| Wed Dec 18 15:16:50.247 [initandlisten] connection accepted from 10.4.101.171:61308 #10 (4 connections now open)
m31000| Wed Dec 18 15:16:50.248 [conn10] authenticate db: local { authenticate: 1, nonce: "e1f489afb95f8fee", user: "__system", key: "b46563a2ba9c48ced447e977c2d63c1d" }
m31002| Wed Dec 18 15:16:50.456 [rsBackgroundSync] replSet syncing to: specter.local:31000
m31000| Wed Dec 18 15:16:50.458 [initandlisten] connection accepted from 10.4.101.171:61309 #11 (5 connections now open)
m31000| Wed Dec 18 15:16:50.459 [conn11] authenticate db: local { authenticate: 1, nonce: "8629f67edc0218b1", user: "__system", key: "23bcee481e68547cbb796af6a7473d58" }
m31001| Wed Dec 18 15:16:50.468 [rsSyncNotifier] replset setting oplog notifier to specter.local:31000
m31000| Wed Dec 18 15:16:50.468 [initandlisten] connection accepted from 10.4.101.171:61310 #12 (6 connections now open)
m31000| Wed Dec 18 15:16:50.469 [conn12] authenticate db: local { authenticate: 1, nonce: "855e274f8021f9a0", user: "__system", key: "23528a1f0ace7b8d7a053d7e4b09875f" }
m31002| Wed Dec 18 15:16:50.697 [rsSyncNotifier] replset setting oplog notifier to specter.local:31000
m31000| Wed Dec 18 15:16:50.699 [initandlisten] connection accepted from 10.4.101.171:61311 #13 (7 connections now open)
m31000| Wed Dec 18 15:16:50.699 [conn13] authenticate db: local { authenticate: 1, nonce: "b498c1167e596015", user: "__system", key: "c6c70ee2aa4fe5bb49dfb86ff7d307ea" }
m31001| Wed Dec 18 15:16:51.471 [rsSync] replSet SECONDARY
m31000| Wed Dec 18 15:16:51.473 [slaveTracking] build index local.slaves { _id: 1 }
m31000| Wed Dec 18 15:16:51.474 [slaveTracking] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:16:51.686 [rsSync] replSet SECONDARY
WARNING: The 'addUser' shell helper is DEPRECATED. Please use 'createUser' instead
m31000| Wed Dec 18 15:16:51.705 [FileAllocator] allocating new datafile /data/db/rs1-0/admin.ns, filling with zeroes...
Successfully added user: { "user" : "admin", "roles" : [ "userAdminAnyDatabase", "readWriteAnyDatabase", "dbAdminAnyDatabase", "clusterAdmin" ], "_id" : ObjectId("52b202b316b0fbef3aedc023") }
m31000| Wed Dec 18 15:16:51.728 [FileAllocator] done allocating datafile /data/db/rs1-0/admin.ns, size: 16MB, took 0.022 secs
m31000| Wed Dec 18 15:16:51.754 [FileAllocator] allocating new datafile /data/db/rs1-0/admin.0, filling with zeroes...
m31000| Wed Dec 18 15:16:51.775 [FileAllocator] done allocating datafile /data/db/rs1-0/admin.0, size: 16MB, took 0.021 secs
m31000| Wed Dec 18 15:16:51.802 [conn1] build index admin.system.users { _id: 1 }
m31000| Wed Dec 18 15:16:51.802 [conn1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:16:51.803 [conn1] build index admin.system.users { user: 1, userSource: 1 }
m31000| Wed Dec 18 15:16:51.803 [conn1] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:51.804 [FileAllocator] allocating new datafile /data/db/rs1-1/admin.ns, filling with zeroes...
m31002| Wed Dec 18 15:16:51.804 [FileAllocator] allocating new datafile /data/db/rs1-2/admin.ns, filling with zeroes...
m31001| Wed Dec 18 15:16:51.835 [FileAllocator] done allocating datafile /data/db/rs1-1/admin.ns, size: 16MB, took 0.03 secs
m31002| Wed Dec 18 15:16:51.856 [FileAllocator] done allocating datafile /data/db/rs1-2/admin.ns, size: 16MB, took 0.051 secs
m31001| Wed Dec 18 15:16:51.885 [FileAllocator] allocating new datafile /data/db/rs1-1/admin.0, filling with zeroes...
m31002| Wed Dec 18 15:16:51.895 [FileAllocator] allocating new datafile /data/db/rs1-2/admin.0, filling with zeroes...
m31001| Wed Dec 18 15:16:51.909 [FileAllocator] done allocating datafile /data/db/rs1-1/admin.0, size: 16MB, took 0.023 secs
m31002| Wed Dec 18 15:16:51.946 [FileAllocator] done allocating datafile /data/db/rs1-2/admin.0, size: 16MB, took 0.05 secs
m31001| Wed Dec 18 15:16:51.964 [repl writer worker 1] build index admin.system.users { _id: 1 }
m31001| Wed Dec 18 15:16:51.965 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:51.965 [repl writer worker 1] build index admin.system.users { user: 1, userSource: 1 }
m31001| Wed Dec 18 15:16:51.966 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:16:51.986 [conn1] command admin.$cmd command: { getlasterror: 1.0, w: "majority", wtimeout: 30000.0 } ntoreturn:1 keyUpdates:0 reslen:204 182ms
m31002| Wed Dec 18 15:16:51.987 [repl writer worker 1] build index admin.system.users { _id: 1 }
m31002| Wed Dec 18 15:16:51.988 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:16:51.988 [repl writer worker 1] build index admin.system.users { user: 1, userSource: 1 }
m31002| Wed Dec 18 15:16:51.992 [repl writer worker 1] build index done. scanned 0 total records. 0.003 secs
m31000| Wed Dec 18 15:16:51.994 [conn1] authenticate db: admin { authenticate: 1, nonce: "fc7045dcdcc4863c", user: "admin", key: "7a7abc0a9dbc8092177fd70e59b74861" }
WARNING: The 'addUser' shell helper is DEPRECATED. Please use 'createUser' instead
Successfully added user: { "user" : "passwordIsTaco", "roles" : [ "readWrite" ], "_id" : ObjectId("52b202b416b0fbef3aedc024") }
m31000| Wed Dec 18 15:16:52.002 [FileAllocator] allocating new datafile /data/db/rs1-0/test.ns, filling with zeroes...
m31000| Wed Dec 18 15:16:52.022 [FileAllocator] done allocating datafile /data/db/rs1-0/test.ns, size: 16MB, took 0.02 secs
m31000| Wed Dec 18 15:16:52.048 [FileAllocator] allocating new datafile /data/db/rs1-0/test.0, filling with zeroes...
m31000| Wed Dec 18 15:16:52.069 [FileAllocator] done allocating datafile /data/db/rs1-0/test.0, size: 16MB, took 0.02 secs
m31000| Wed Dec 18 15:16:52.097 [conn1] build index test.system.users { _id: 1 }
m31000| Wed Dec 18 15:16:52.098 [conn1] build index done. scanned 0 total records. 0.001 secs
m31000| Wed Dec 18 15:16:52.098 [conn1] build index test.system.users { user: 1, userSource: 1 }
m31000| Wed Dec 18 15:16:52.098 [conn1] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:52.100 [FileAllocator] allocating new datafile /data/db/rs1-1/test.ns, filling with zeroes...
m31002| Wed Dec 18 15:16:52.100 [FileAllocator] allocating new datafile /data/db/rs1-2/test.ns, filling with zeroes...
m31002| Wed Dec 18 15:16:52.135 [FileAllocator] done allocating datafile /data/db/rs1-2/test.ns, size: 16MB, took 0.035 secs
m31001| Wed Dec 18 15:16:52.137 [FileAllocator] done allocating datafile /data/db/rs1-1/test.ns, size: 16MB, took 0.037 secs
m31002| Wed Dec 18 15:16:52.179 [FileAllocator] allocating new datafile /data/db/rs1-2/test.0, filling with zeroes...
m31001| Wed Dec 18 15:16:52.187 [FileAllocator] allocating new datafile /data/db/rs1-1/test.0, filling with zeroes...
m31002| Wed Dec 18 15:16:52.202 [FileAllocator] done allocating datafile /data/db/rs1-2/test.0, size: 16MB, took 0.022 secs
m31001| Wed Dec 18 15:16:52.230 [FileAllocator] done allocating datafile /data/db/rs1-1/test.0, size: 16MB, took 0.042 secs
m31002| Wed Dec 18 15:16:52.252 [repl writer worker 1] build index test.system.users { _id: 1 }
m31002| Wed Dec 18 15:16:52.253 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:16:52.253 [repl writer worker 1] build index test.system.users { user: 1, userSource: 1 }
m31002| Wed Dec 18 15:16:52.254 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:16:52.263 [conn1] command test.$cmd command: { getlasterror: 1.0, w: "majority", wtimeout: 30000.0 } ntoreturn:1 keyUpdates:0 reslen:204 164ms
m31001| Wed Dec 18 15:16:52.267 [repl writer worker 1] build index test.system.users { _id: 1 }
m31001| Wed Dec 18 15:16:52.268 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31001| Wed Dec 18 15:16:52.268 [repl writer worker 1] build index test.system.users { user: 1, userSource: 1 }
m31001| Wed Dec 18 15:16:52.269 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:16:53.017 [conn2] end connection 10.4.101.171:61253 (2 connections now open)
m31000| Wed Dec 18 15:16:53.017 [rsHealthPoll] replSet member specter.local:31001 is now in state SECONDARY
m31002| Wed Dec 18 15:16:53.018 [initandlisten] connection accepted from 10.4.101.171:61312 #4 (3 connections now open)
m31002| Wed Dec 18 15:16:53.018 [conn4] authenticate db: local { authenticate: 1, nonce: "5762f4b896e2fc5a", user: "__system", key: "d765dd327d76897fba1b3d9470f86d7d" }
m31000| Wed Dec 18 15:16:53.019 [rsHealthPoll] replSet member specter.local:31002 is now in state SECONDARY
m31001| Wed Dec 18 15:16:53.236 [rsHealthPoll] replSet member specter.local:31002 is now in state SECONDARY
m31002| Wed Dec 18 15:16:53.450 [rsHealthPoll] replSet member specter.local:31001 is now in state SECONDARY
-------- Upgrading set to 26
-------- Picking secondary 1
m31001| Wed Dec 18 15:16:57.266 [conn1] authenticate db: admin { authenticate: 1, nonce: "19546c00759e3ae5", user: "admin", key: "8f1d47b5a86de21c82906d42b7b59660" }
m31000| Wed Dec 18 15:16:57.268 [conn1] replSet replSetReconfig config object parses ok, 3 members specified
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
ReplSetTest stop *** Shutting down mongod in port 31001 ***
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] now exiting
m31001| Wed Dec 18 15:16:58.270 dbexit:
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: going to close listening sockets...
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] closing listening socket: 16
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] closing listening socket: 17
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] closing listening socket: 18
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] removing socket file: /tmp/mongodb-31001.sock
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: going to flush diaglog...
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: going to close sockets...
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: waiting for fs preallocator...
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: lock for final commit...
m31001| Wed Dec 18 15:16:58.270 [signalProcessingThread] shutdown: final commit...
m31001| Wed Dec 18 15:16:58.270 [conn4] end connection 10.4.101.171:61259 (2 connections now open)
m31001| Wed Dec 18 15:16:58.270 [rsBackgroundSync] Socket recv() errno:9 Bad file descriptor 10.4.101.171:31000
m31001| Wed Dec 18 15:16:58.270 [rsBackgroundSync] SocketException: remote: 10.4.101.171:31000 error: 9001 socket exception [RECV_ERROR] server [10.4.101.171:31000]
m31001| Wed Dec 18 15:16:58.270 [conn1] end connection 127.0.0.1:61249 (2 connections now open)
m31002| Wed Dec 18 15:16:58.270 [conn3] end connection 10.4.101.171:61257 (2 connections now open)
m31000| Wed Dec 18 15:16:58.270 [conn4] end connection 10.4.101.171:61302 (6 connections now open)
m31001| Wed Dec 18 15:16:58.271 [conn3] end connection 10.4.101.171:61258 (2 connections now open)
m31000| Wed Dec 18 15:16:58.271 [conn12] end connection 10.4.101.171:61310 (5 connections now open)
m31001| Wed Dec 18 15:16:58.271 [rsBackgroundSync] replSet sync source problem: 10278 dbclient error communicating with server: specter.local:31000
m31001| Wed Dec 18 15:16:58.289 [signalProcessingThread] shutdown: closing all files...
m31001| Wed Dec 18 15:16:58.289 [signalProcessingThread] closeAllFiles() finished
m31001| Wed Dec 18 15:16:58.289 [signalProcessingThread] journalCleanup...
m31001| Wed Dec 18 15:16:58.289 [signalProcessingThread] removeJournalFiles
m31001| Wed Dec 18 15:16:58.290 [signalProcessingThread] shutdown: removing fs lock...
m31001| Wed Dec 18 15:16:58.290 dbexit: really exiting now
m31000| Wed Dec 18 15:16:59.020 [rsHealthPoll] DBClientCursor::init call() failed
m31000| Wed Dec 18 15:16:59.020 [rsHealthPoll] replset info specter.local:31001 heartbeat failed, retrying
m31000| Wed Dec 18 15:16:59.021 [rsHealthPoll] replSet info specter.local:31001 is down (or slow to respond):
m31000| Wed Dec 18 15:16:59.021 [rsHealthPoll] replSet member specter.local:31001 is now in state DOWN
2013-12-18T15:16:59.271-0500 shell: stopped mongo program on port 31001
ReplSetTest n is : connection to specter.local:31001
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31001, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1", "dbpath" : "$set-$node", "binVersion" : "26", "clusterAuthMode" : "keyFile", "restart" : true, "pathOpts" : { "node" : 1, "set" : "rs1" } }
ReplSetTest (Re)Starting....
2013-12-18T15:16:59.273-0500 shell: started program /Users/vkarpov/qa/QA/QA-424/mongod-26 --oplogSize 40 --keyFile jstests/libs/key1 --port 31001 --noprealloc --smallfiles --rest --replSet rs1 --dbpath /data/db/rs1-1 --clusterAuthMode keyFile
2013-12-18T15:16:59.273-0500 warning: Failed to connect to 127.0.0.1:31001, reason: errno:61 Connection refused
m31001| 2013-12-18T15:16:59.297-0500 ** WARNING: --rest is specified without --httpinterface,
m31001| 2013-12-18T15:16:59.297-0500 ** enabling http interface
m31001| note: noprealloc may hurt performance in many applications
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] MongoDB starting : pid=3467 port=31001 dbpath=/data/db/rs1-1 64-bit host=specter.local
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten]
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] ** NOTE: This is a development version (2.5.5-pre-) of MongoDB.
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] ** Not recommended for production.
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten]
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] ** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten]
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] db version v2.5.5-pre-
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] git version: bf44f7690aadf1f99e7979adf6c33d4dea2f5464
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] build info: Darwin mci-osx108-7.build.10gen.cc 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] allocator: system
m31001| 2013-12-18T15:16:59.302-0500 [initandlisten] options: { clusterAuthMode: "keyFile", dbpath: "/data/db/rs1-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "rs1", rest: true, smallfiles: true }
m31001| 2013-12-18T15:16:59.309-0500 [initandlisten] journal dir=/data/db/rs1-1/journal
m31001| 2013-12-18T15:16:59.309-0500 [initandlisten] recover : no journal files present, no recovery needed
m31001| 2013-12-18T15:16:59.386-0500 [initandlisten] waiting for connections on port 31001
m31001| 2013-12-18T15:16:59.386-0500 [websvr] admin web console waiting for connections on port 32001
m31001| 2013-12-18T15:16:59.388-0500 [rsStart] replSet I am specter.local:31001
m31001| 2013-12-18T15:16:59.409-0500 [rsStart] replSet STARTUP2
m31001| 2013-12-18T15:16:59.410-0500 [rsSync] replSet SECONDARY
m31002| Wed Dec 18 15:16:59.453 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Wed Dec 18 15:16:59.453 [rsHealthPoll] replset info specter.local:31001 heartbeat failed, retrying
m31001| 2013-12-18T15:16:59.453-0500 [initandlisten] connection accepted from 10.4.101.171:61317 #1 (1 connection now open)
m31001| 2013-12-18T15:16:59.454-0500 [conn1] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31002| Wed Dec 18 15:16:59.454 [rsHealthPoll] replset info specter.local:31001 thinks that we are down
m31001| 2013-12-18T15:16:59.475-0500 [initandlisten] connection accepted from 127.0.0.1:61318 #2 (2 connections now open)
[ connection to specter.local:31000, connection to specter.local:31001, connection to specter.local:31002 ]
m31001| 2013-12-18T15:16:59.477-0500 [conn2] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "admin", key: "xxx" }
ReplSetTest waitForIndicator state on connection to specter.local:31001 [ 1, 2, 7 ]
ReplSetTest waitForIndicator from node connection to specter.local:31001
ReplSetTest waitForIndicator Initial status ( timeout : 30000 ) :
{ "set" : "rs1", "date" : ISODate("2013-12-18T20:16:59Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "specter.local:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 47, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "self" : true }, { "_id" : 1, "name" : "specter.local:31001", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", "uptime" : 0, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "lastHeartbeat" : ISODate("2013-12-18T20:16:59Z"), "lastHeartbeatRecv" : ISODate("2013-12-18T20:16:57Z"), "pingMs" : 0, "syncingTo" : "specter.local:31000" }, { "_id" : 2, "name" : "specter.local:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 34, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "lastHeartbeat" : ISODate("2013-12-18T20:16:59Z"), "lastHeartbeatRecv" : ISODate("2013-12-18T20:16:59Z"), "pingMs" : 0, "syncingTo" : "specter.local:31000" } ], "ok" : 1 }
Status for : specter.local:31000, checking specter.local:31001/specter.local:31001
Status for : specter.local:31001, checking specter.local:31001/specter.local:31001
Status : 8 target state : 1
Status : 8 target state : 2
Status : 8 target state : 7
Status for : specter.local:31002, checking specter.local:31001/specter.local:31001
m31001| 2013-12-18T15:17:01.021-0500 [initandlisten] connection accepted from 10.4.101.171:61319 #3 (3 connections now open)
m31001| 2013-12-18T15:17:01.022-0500 [conn3] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31000| Wed Dec 18 15:17:01.022 [rsHealthPoll] replset info specter.local:31001 thinks that we are down
m31000| Wed Dec 18 15:17:01.022 [rsHealthPoll] replSet member specter.local:31001 is up
m31000| Wed Dec 18 15:17:01.022 [rsHealthPoll] replSet member specter.local:31001 is now in state SECONDARY
ReplSetTest waitForIndicator final status:
{ "set" : "rs1", "date" : ISODate("2013-12-18T20:17:01Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "specter.local:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 49, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "self" : true }, { "_id" : 1, "name" : "specter.local:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 0, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "lastHeartbeat" : ISODate("2013-12-18T20:17:01Z"), "lastHeartbeatRecv" : ISODate("2013-12-18T20:16:57Z"), "pingMs" : 1, "syncingTo" : "specter.local:31000" }, { "_id" : 2, "name" : "specter.local:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 36, "optime" : Timestamp(1387397812, 1), "optimeDate" : ISODate("2013-12-18T20:16:52Z"), "lastHeartbeat" : ISODate("2013-12-18T20:17:01Z"), "lastHeartbeatRecv" : ISODate("2013-12-18T20:16:59Z"), "pingMs" : 0, "syncingTo" : "specter.local:31000" } ], "ok" : 1 }
-------- Upgrade done
m31002| Wed Dec 18 15:17:01.390 [initandlisten] connection accepted from 10.4.101.171:61320 #5 (3 connections now open)
m31000| Wed Dec 18 15:17:01.390 [initandlisten] connection accepted from 10.4.101.171:61321 #14 (6 connections now open)
m31000| Wed Dec 18 15:17:01.390 [conn14] authenticate db: local { authenticate: 1, nonce: "ac2983012946cf60", user: "__system", key: "3726c720fb6f9b33e660d6027841f1e6" }
m31002| Wed Dec 18 15:17:01.390 [conn5] authenticate db: local { authenticate: 1, nonce: "2805ffab9f695160", user: "__system", key: "de9ec95a9744a2827b7d2745c2b1ee14" }
m31001| 2013-12-18T15:17:01.390-0500 [rsHealthPoll] replSet member specter.local:31000 is up
m31001| 2013-12-18T15:17:01.390-0500 [rsHealthPoll] replSet member specter.local:31002 is up
m31001| 2013-12-18T15:17:01.390-0500 [rsHealthPoll] replSet member specter.local:31000 is now in state PRIMARY
m31001| 2013-12-18T15:17:01.391-0500 [rsHealthPoll] replSet member specter.local:31002 is now in state SECONDARY
m31000| Wed Dec 18 15:17:02.132 [conn10] end connection 10.4.101.171:61308 (5 connections now open)
m31001| 2013-12-18T15:17:03.456-0500 [conn1] end connection 10.4.101.171:61317 (2 connections now open)
m31001| 2013-12-18T15:17:03.456-0500 [initandlisten] connection accepted from 10.4.101.171:61322 #4 (3 connections now open)
m31001| 2013-12-18T15:17:03.457-0500 [conn4] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31001| 2013-12-18T15:17:05.414-0500 [rsBackgroundSync] replSet syncing to: specter.local:31000
m31000| Wed Dec 18 15:17:05.414 [initandlisten] connection accepted from 10.4.101.171:61323 #15 (6 connections now open)
m31000| Wed Dec 18 15:17:05.415 [conn15] authenticate db: local { authenticate: 1, nonce: "fa7a368139ae2ba7", user: "__system", key: "2309259e1a53da912a75f82c214540b3" }
m31000| Wed Dec 18 15:17:05.415 [initandlisten] connection accepted from 10.4.101.171:61324 #16 (7 connections now open)
m31000| Wed Dec 18 15:17:05.416 [conn16] authenticate db: local { authenticate: 1, nonce: "21042e687c343001", user: "__system", key: "d759262125f399582b07d4069c484608" }
m31001| 2013-12-18T15:17:05.416-0500 [rsBackgroundSync] upstream updater is unsupported on this version
m31000| Wed Dec 18 15:17:05.416 [conn16] end connection 10.4.101.171:61324 (6 connections now open)
m31000| Wed Dec 18 15:17:05.416 [initandlisten] connection accepted from 10.4.101.171:61325 #17 (7 connections now open)
m31000| Wed Dec 18 15:17:05.416 [conn17] authenticate db: local { authenticate: 1, nonce: "f3300a30c74905e8", user: "__system", key: "e3149d2c30d3613e0835888c4bba9d49" }
m31001| 2013-12-18T15:17:05.419-0500 [rsSyncNotifier] replset setting oplog notifier to specter.local:31000
m31000| Wed Dec 18 15:17:05.419 [conn17] end connection 10.4.101.171:61325 (6 connections now open)
m31000| Wed Dec 18 15:17:05.420 [initandlisten] connection accepted from 10.4.101.171:61326 #18 (7 connections now open)
m31000| Wed Dec 18 15:17:05.420 [conn18] authenticate db: local { authenticate: 1, nonce: "1121b1e4dcce4274", user: "__system", key: "f75991996eac5bed05d8ed20db0ca796" }
m31001| 2013-12-18T15:17:05.420-0500 [rsSyncNotifier] upstream updater is unsupported on this version
m31000| Wed Dec 18 15:17:05.420 [conn18] end connection 10.4.101.171:61326 (6 connections now open)
m31000| Wed Dec 18 15:17:05.421 [initandlisten] connection accepted from 10.4.101.171:61327 #19 (7 connections now open)
m31000| Wed Dec 18 15:17:05.421 [conn19] authenticate db: local { authenticate: 1, nonce: "5fa320c7d796c9ee", user: "__system", key: "68f2d3ca6a6d04b844448298e5c326b1" }
m31001| 2013-12-18T15:17:09.027-0500 [conn3] end connection 10.4.101.171:61319 (2 connections now open)
m31001| 2013-12-18T15:17:09.028-0500 [initandlisten] connection accepted from 10.4.101.171:61328 #5 (3 connections now open)
m31001| 2013-12-18T15:17:09.028-0500 [conn5] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31000| Wed Dec 18 15:17:11.105 [conn1] replSet info stepping down as primary secs=50
m31000| Wed Dec 18 15:17:11.105 [conn1] replSet relinquishing primary state
m31000| Wed Dec 18 15:17:11.105 [conn1] replSet SECONDARY
m31000| Wed Dec 18 15:17:11.105 [conn1] replSet closing client sockets after relinquishing primary
m31000| Wed Dec 18 15:17:11.105 [conn13] end connection 10.4.101.171:61311 (6 connections now open)
m31000| Wed Dec 18 15:17:11.105 [conn19] end connection 10.4.101.171:61327 (6 connections now open)
m31000| Wed Dec 18 15:17:11.105 [conn1] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:61247]
2013-12-18T15:17:11.106-0500 DBClientCursor::init call() failed
m31001| 2013-12-18T15:17:11.106-0500 [rsBackgroundSync] replSet sync source problem: 10278 dbclient error communicating with server: specter.local:31000
m31001| 2013-12-18T15:17:11.106-0500 [rsBackgroundSync] replSet syncing to: specter.local:31000
m31002| Wed Dec 18 15:17:11.106 [rsBackgroundSync] replSet sync source problem: 10278 dbclient error communicating with server: specter.local:31000
m31002| Wed Dec 18 15:17:11.106 [rsBackgroundSync] replSet syncing to: specter.local:31000
Error: error doing query: failed
m31000| Wed Dec 18 15:17:11.107 [initandlisten] connection accepted from 10.4.101.171:61329 #20 (5 connections now open)
m31000| Wed Dec 18 15:17:11.107 [initandlisten] connection accepted from 10.4.101.171:61330 #21 (6 connections now open)
m31000| Wed Dec 18 15:17:11.107 [conn20] authenticate db: local { authenticate: 1, nonce: "7c71ce835b788ddb", user: "__system", key: "c307de6962992ccc64cb1b8dd67a0ee2" }
m31000| Wed Dec 18 15:17:11.107 [initandlisten] connection accepted from 10.4.101.171:61331 #22 (7 connections now open)
m31000| Wed Dec 18 15:17:11.107 [conn21] authenticate db: local { authenticate: 1, nonce: "9ce0627bbf69ec5", user: "__system", key: "2260371835ebd88ce92a543a7dd7cba2" }
m31000| Wed Dec 18 15:17:11.108 [initandlisten] connection accepted from 10.4.101.171:61332 #23 (8 connections now open)
m31000| Wed Dec 18 15:17:11.108 [conn23] authenticate db: local { authenticate: 1, nonce: "71769e892f956ca4", user: "__system", key: "678b020d730495d024b24214473c3617" }
m31001| 2013-12-18T15:17:11.108-0500 [rsBackgroundSync] upstream updater is unsupported on this version
m31000| Wed Dec 18 15:17:11.108 [conn23] end connection 10.4.101.171:61332 (7 connections now open)
m31000| Wed Dec 18 15:17:11.109 [initandlisten] connection accepted from 10.4.101.171:61333 #24 (8 connections now open)
m31000| Wed Dec 18 15:17:11.109 [conn24] authenticate db: local { authenticate: 1, nonce: "65346e3fd02462e9", user: "__system", key: "290c2cc516bd9d43b7b5aa513a010b10" }
m31001| 2013-12-18T15:17:11.396-0500 [rsHealthPoll] replSet member specter.local:31000 is now in state SECONDARY
m31001| 2013-12-18T15:17:11.396-0500 [rsMgr] not electing self, specter.local:31002 would veto with 'specter.local:31001 is trying to elect itself but specter.local:31000 is already primary and more up-to-date'
m31002| Wed Dec 18 15:17:11.459 [rsHealthPoll] replSet member specter.local:31000 is now in state SECONDARY
m31002| Wed Dec 18 15:17:11.986 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Wed Dec 18 15:17:12.165 [conn11] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [10.4.101.171:61309]
m31000| Wed Dec 18 15:17:15.398 [conn14] end connection 10.4.101.171:61321 (6 connections now open)
m31000| Wed Dec 18 15:17:15.398 [initandlisten] connection accepted from 10.4.101.171:61337 #25 (7 connections now open)
m31000| Wed Dec 18 15:17:15.398 [conn25] authenticate db: local { authenticate: 1, nonce: "b1a67be5340783f7", user: "__system", key: "bda8763a7d08e4ced5f221ae00028b22" }
m31000| Wed Dec 18 15:17:15.458 [conn15] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [10.4.101.171:61323]
m31001| 2013-12-18T15:17:17.557-0500 [rsMgr] replSet info electSelf 1
m31002| Wed Dec 18 15:17:17.557 [conn5] replSet info voting yea for specter.local:31001 (1)
m31000| Wed Dec 18 15:17:17.557 [conn25] replSet info voting yea for specter.local:31001 (1)
m31002| Wed Dec 18 15:17:18.235 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Wed Dec 18 15:17:19.464 [conn7] end connection 10.4.101.171:61305 (5 connections now open)
m31000| Wed Dec 18 15:17:19.465 [initandlisten] connection accepted from 10.4.101.171:61338 #26 (6 connections now open)
m31000| Wed Dec 18 15:17:19.465 [conn26] authenticate db: local { authenticate: 1, nonce: "9bccbd9cdf6792ee", user: "__system", key: "1fd56d5783938d3cd123546ea19f2361" }
m31000| Wed Dec 18 15:17:21.149 [conn20] end connection 10.4.101.171:61329 (5 connections now open)
m31001| 2013-12-18T15:17:21.149-0500 [rsMgr] replSet PRIMARY
------- Trying to run authSchemaUpgradeStep...
m31001| 2013-12-18T15:17:21.312-0500 [conn2] build index on: admin.system.version properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" }
m31001| 2013-12-18T15:17:21.312-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.313-0500 [conn2] Auth schema upgrade erasing contents of admin.system.backup_users
m31001| 2013-12-18T15:17:21.313-0500 [conn2] Auth schema upgrade backing up admin.system.users into admin.system.backup_users
m31001| 2013-12-18T15:17:21.313-0500 [conn2] build index on: admin.system.backup_users properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "admin.system.backup_users" }
m31001| 2013-12-18T15:17:21.313-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.313-0500 [conn2] Auth schema upgrade dropping indexes from admin.system.new_users
m31001| 2013-12-18T15:17:21.313-0500 [conn2] CMD: dropIndexes admin.system.new_users
m31001| 2013-12-18T15:17:21.313-0500 [conn2] warning: Auth schema upgrade failed to drop indexes on admin.system.new_users (Unknown error code dropIndexes failed)
m31001| 2013-12-18T15:17:21.313-0500 [conn2] Auth schema upgrade erasing contents of admin.system.new_users
m31001| 2013-12-18T15:17:21.313-0500 [conn2] Auth schema upgrade creating needed indexes of admin.system.new_users
m31001| 2013-12-18T15:17:21.314-0500 [conn2] build index on: admin.system.new_users properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "admin.system.new_users" }
m31001| 2013-12-18T15:17:21.314-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.314-0500 [conn2] build index on: admin.system.new_users properties: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.new_users" }
m31001| 2013-12-18T15:17:21.314-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.314-0500 [conn2] Auth schema upgrade processing schema version 1 users from database admin
m31001| 2013-12-18T15:17:21.315-0500 [conn2] Auth schema upgrade processing schema version 1 users from database local
m31001| 2013-12-18T15:17:21.315-0500 [conn2] Auth schema upgrade processing schema version 1 users from database test
m31001| 2013-12-18T15:17:21.319-0500 [conn2] Auth schema upgrade erasing version 1 users from admin.system.users
m31001| 2013-12-18T15:17:21.319-0500 [conn2] CMD: dropIndexes admin.system.users
m31001| 2013-12-18T15:17:21.321-0500 [conn2] Auth schema upgrade erasing admin.system.roles
m31001| 2013-12-18T15:17:21.321-0500 [conn2] CMD: dropIndexes admin.system.roles
m31001| 2013-12-18T15:17:21.321-0500 [conn2] warning: Auth schema upgrade failed to drop indexes on admin.system.roles (Unknown error code dropIndexes failed)
m31001| 2013-12-18T15:17:21.321-0500 [conn2] Auth schema upgrade creating needed indexes of admin.system.roles
m31001| 2013-12-18T15:17:21.321-0500 [conn2] build index on: admin.system.roles properties: { v: 1, unique: true, key: { role: 1, db: 1 }, name: "role_1_db_1", ns: "admin.system.roles" }
m31001| 2013-12-18T15:17:21.321-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.321-0500 [conn2] build index on: admin.system.roles properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "admin.system.roles" }
m31001| 2013-12-18T15:17:21.322-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.322-0500 [conn2] Auth schema upgrade creating needed indexes of admin.system.users
m31001| 2013-12-18T15:17:21.322-0500 [conn2] build index on: admin.system.users properties: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
m31001| 2013-12-18T15:17:21.322-0500 [conn2] build index done. scanned 0 total records. 0 secs
m31001| 2013-12-18T15:17:21.322-0500 [conn2] Auth schema upgrade copying version 3 users from admin.system.new_users to admin.system.users
m31002| Wed Dec 18 15:17:21.465 [rsHealthPoll] replSet member specter.local:31001 is now in state PRIMARY
m31002| Wed Dec 18 15:17:23.036 [conn4] end connection 10.4.101.171:61312 (2 connections now open)
m31000| Wed Dec 18 15:17:23.037 [rsHealthPoll] replSet member specter.local:31001 is now in state PRIMARY
m31002| Wed Dec 18 15:17:23.037 [initandlisten] connection accepted from 10.4.101.171:61339 #6 (3 connections now open)
m31002| Wed Dec 18 15:17:23.037 [conn6] authenticate db: local { authenticate: 1, nonce: "ff50789a5b2f5bb8", user: "__system", key: "b48d47640ad86af5c9be78da55cf435a" }
m31000| Wed Dec 18 15:17:23.044 [rsBackgroundSync] replSet syncing to: specter.local:31001
m31001| 2013-12-18T15:17:23.044-0500 [initandlisten] connection accepted from 10.4.101.171:61340 #6 (4 connections now open)
m31001| 2013-12-18T15:17:23.045-0500 [conn6] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31000| Wed Dec 18 15:17:23.046 [rsSync] build index local.replset.minvalid { _id: 1 }
m31000| Wed Dec 18 15:17:23.046 [rsSync] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.047 [repl writer worker 1] build index admin.system.version { _id: 1 }
m31000| Wed Dec 18 15:17:23.048 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.049 [repl writer worker 1] build index admin.system.backup_users { _id: 1 }
m31000| Wed Dec 18 15:17:23.050 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.050 [repl writer worker 1] build index admin.system.version { _id: 1 }
m31002| Wed Dec 18 15:17:23.051 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.051 [repl writer worker 1] build index admin.system.new_users { _id: 1 }
m31002| Wed Dec 18 15:17:23.051 [rsSyncNotifier] Socket recv() errno:54 Connection reset by peer 10.4.101.171:31000
m31002| Wed Dec 18 15:17:23.051 [rsSyncNotifier] SocketException: remote: 10.4.101.171:31000 error: 9001 socket exception [RECV_ERROR] server [10.4.101.171:31000]
m31002| Wed Dec 18 15:17:23.051 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: specter.local:31000
m31000| Wed Dec 18 15:17:23.051 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.051 [repl writer worker 1] info: creating collection admin.system.new_users on add index
m31000| Wed Dec 18 15:17:23.051 [repl writer worker 1] build index admin.system.new_users { user: 1, db: 1 }
m31002| Wed Dec 18 15:17:23.051 [repl writer worker 1] build index admin.system.backup_users { _id: 1 }
m31000| Wed Dec 18 15:17:23.052 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.052 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.053 [repl writer worker 1] build index admin.system.new_users { _id: 1 }
m31000| Wed Dec 18 15:17:23.053 [repl writer worker 2] CMD: dropIndexes admin.system.users
m31002| Wed Dec 18 15:17:23.054 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.054 [repl writer worker 1] info: creating collection admin.system.new_users on add index
m31002| Wed Dec 18 15:17:23.054 [repl writer worker 1] build index admin.system.new_users { user: 1, db: 1 }
m31002| Wed Dec 18 15:17:23.055 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.055 [repl writer worker 2] build index admin.system.roles { _id: 1 }
m31000| Wed Dec 18 15:17:23.055 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.055 [repl writer worker 2] info: creating collection admin.system.roles on add index
m31000| Wed Dec 18 15:17:23.055 [repl writer worker 2] build index admin.system.roles { role: 1, db: 1 }
m31000| Wed Dec 18 15:17:23.056 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31000| Wed Dec 18 15:17:23.056 [repl writer worker 2] build index admin.system.users { user: 1, db: 1 }
m31000| Wed Dec 18 15:17:23.057 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.057 [repl writer worker 2] CMD: dropIndexes admin.system.users
m31000| Wed Dec 18 15:17:23.058 [repl writer worker 3] ERROR: writer worker caught exception: system.users entry must have either a 'pwd' field or a 'userSource' field, but not both on: { ts: Timestamp 1387397841000|11, h: 223338476503781983, v: 2, op: "i", ns: "admin.system.users", o: { _id: "admin.admin", user: "admin", db: "admin", credentials: { MONGODB-CR: "3dfa1231d2c5c39175c1de49530c0a65" }, roles: [ { role: "userAdminAnyDatabase", db: "admin" }, { role: "readWriteAnyDatabase", db: "admin" }, { role: "dbAdminAnyDatabase", db: "admin" }, { role: "clusterAdmin", db: "admin" } ] } }
m31000| Wed Dec 18 15:17:23.058 [repl writer worker 3] Fatal Assertion 16360
m31000| 0x10044c60b 0x100425837 0x10033c97f 0x10042cc48 0x10047f1a5 0x7fff8c3e8772 0x7fff8c3d51a1
m31002| Wed Dec 18 15:17:23.058 [repl writer worker 2] build index admin.system.roles { _id: 1 }
m31002| Wed Dec 18 15:17:23.059 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.059 [repl writer worker 2] info: creating collection admin.system.roles on add index
m31002| Wed Dec 18 15:17:23.059 [repl writer worker 2] build index admin.system.roles { role: 1, db: 1 }
m31002| Wed Dec 18 15:17:23.060 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31002| Wed Dec 18 15:17:23.060 [repl writer worker 2] build index admin.system.users { user: 1, db: 1 }
m31002| Wed Dec 18 15:17:23.061 [repl writer worker 2] build index done. scanned 0 total records. 0 secs
m31000| 0 mongod-248 0x000000010044c60b _ZN5mongo15printStackTraceERSo + 43
m31000| 1 mongod-248 0x0000000100425837 _ZN5mongo13fassertFailedEi + 151
m31000| 2 mongod-248 0x000000010033c97f _ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE + 271
m31000| 3 mongod-248 0x000000010042cc48 _ZN5mongo10threadpool6Worker4loopEv + 138
m31000| 4 mongod-248 0x000000010047f1a5 thread_proxy + 229
m31000| 5 libsystem_c.dylib 0x00007fff8c3e8772 _pthread_start + 327
m31000| 6 libsystem_c.dylib 0x00007fff8c3d51a1 thread_start + 13
m31000| Wed Dec 18 15:17:23.061 [repl writer worker 3]
m31000|
m31000| ***aborting after fassert() failure
m31000|
m31000|
m31000| Wed Dec 18 15:17:23.061 Got signal: 6 (Abort trap: 6).
m31000|
m31000| Wed Dec 18 15:17:23.063 Backtrace:
m31000| 0x10044c60b 0x100001121 0x7fff8c3d690a 0 0x7fff8c42df61 0x100425875 0x10033c97f 0x10042cc48 0x10047f1a5 0x7fff8c3e8772 0x7fff8c3d51a1
m31000| 0 mongod-248 0x000000010044c60b _ZN5mongo15printStackTraceERSo + 43
m31000| 1 mongod-248 0x0000000100001121 _ZN5mongo10abruptQuitEi + 225
m31000| 2 libsystem_c.dylib 0x00007fff8c3d690a _sigtramp + 26
m31000| 3 ??? 0x0000000000000000 0x0 + 0
m31000| 4 libsystem_c.dylib 0x00007fff8c42df61 abort + 143
m31000| 5 mongod-248 0x0000000100425875 _ZN5mongo13fassertFailedEi + 213
m31000| 6 mongod-248 0x000000010033c97f _ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE + 271
m31000| 7 mongod-248 0x000000010042cc48 _ZN5mongo10threadpool6Worker4loopEv + 138
m31000| 8 mongod-248 0x000000010047f1a5 thread_proxy + 229
m31000| 9 libsystem_c.dylib 0x00007fff8c3e8772 _pthread_start + 327
m31000| 10 libsystem_c.dylib 0x00007fff8c3d51a1 thread_start + 13
m31000|
m31002| Wed Dec 18 15:17:23.067 [rsBackgroundSync] replSet sync source problem: 10278 dbclient error communicating with server: specter.local:31000
m31002| Wed Dec 18 15:17:23.067 [rsBackgroundSync] replSet syncing to: specter.local:31001
m31002| Wed Dec 18 15:17:23.067 [conn6] end connection 10.4.101.171:61339 (2 connections now open)
m31001| 2013-12-18T15:17:23.067-0500 [conn5] end connection 10.4.101.171:61328 (3 connections now open)
m31001| 2013-12-18T15:17:23.068-0500 [initandlisten] connection accepted from 10.4.101.171:61341 #7 (4 connections now open)
m31001| 2013-12-18T15:17:23.068-0500 [conn7] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m31002| Wed Dec 18 15:17:23.069 [repl writer worker 3] ERROR: writer worker caught exception: system.users entry must have either a 'pwd' field or a 'userSource' field, but not both on: { ts: Timestamp 1387397841000|11, h: 223338476503781983, v: 2, op: "i", ns: "admin.system.users", o: { _id: "admin.admin", user: "admin", db: "admin", credentials: { MONGODB-CR: "3dfa1231d2c5c39175c1de49530c0a65" }, roles: [ { role: "userAdminAnyDatabase", db: "admin" }, { role: "readWriteAnyDatabase", db: "admin" }, { role: "dbAdminAnyDatabase", db: "admin" }, { role: "clusterAdmin", db: "admin" } ] } }
m31002| Wed Dec 18 15:17:23.069 [repl writer worker 3] Fatal Assertion 16360
m31002| 0x10044c60b 0x100425837 0x10033c97f 0x10042cc48 0x10047f1a5 0x7fff8c3e8772 0x7fff8c3d51a1
m31002| 0 mongod-248 0x000000010044c60b _ZN5mongo15printStackTraceERSo + 43
m31002| 1 mongod-248 0x0000000100425837 _ZN5mongo13fassertFailedEi + 151
m31002| 2 mongod-248 0x000000010033c97f _ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE + 271
m31002| 3 mongod-248 0x000000010042cc48 _ZN5mongo10threadpool6Worker4loopEv + 138
m31002| 4 mongod-248 0x000000010047f1a5 thread_proxy + 229
m31002| 5 libsystem_c.dylib 0x00007fff8c3e8772 _pthread_start + 327
m31002| 6 libsystem_c.dylib 0x00007fff8c3d51a1 thread_start + 13
m31002| Wed Dec 18 15:17:23.072 [repl writer worker 3]
m31002|
m31002| ***aborting after fassert() failure
m31002|
m31002|
m31002| Wed Dec 18 15:17:23.072 Got signal: 6 (Abort trap: 6).
m31002|
m31002| Wed Dec 18 15:17:23.074 Backtrace:
m31002| 0x10044c60b 0x100001121 0x7fff8c3d690a 0 0x7fff8c42df61 0x100425875 0x10033c97f 0x10042cc48 0x10047f1a5 0x7fff8c3e8772 0x7fff8c3d51a1
m31002| 0 mongod-248 0x000000010044c60b _ZN5mongo15printStackTraceERSo + 43
m31002| 1 mongod-248 0x0000000100001121 _ZN5mongo10abruptQuitEi + 225
m31002| 2 libsystem_c.dylib 0x00007fff8c3d690a _sigtramp + 26
m31002| 3 ???
0x0000000000000000 0x0 + 0 m31002| 4 libsystem_c.dylib 0x00007fff8c42df61 abort + 143 m31002| 5 mongod-248 0x0000000100425875 _ZN5mongo13fassertFailedEi + 213 m31002| 6 mongod-248 0x000000010033c97f _ZN5mongo7replset14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE + 271 m31002| 7 mongod-248 0x000000010042cc48 _ZN5mongo10threadpool6Worker4loopEv + 138 m31002| 8 mongod-248 0x000000010047f1a5 thread_proxy + 229 m31002| 9 libsystem_c.dylib 0x00007fff8c3e8772 _pthread_start + 327 m31002| 10 libsystem_c.dylib 0x00007fff8c3d51a1 thread_start + 13 m31002| m31001| 2013-12-18T15:17:23.079-0500 [conn4] end connection 10.4.101.171:61322 (3 connections now open) m31001| 2013-12-18T15:17:23.402-0500 [rsHealthPoll] DBClientCursor::init call() failed m31001| 2013-12-18T15:17:23.402-0500 [rsHealthPoll] replset info specter.local:31002 heartbeat failed, retrying m31001| 2013-12-18T15:17:23.403-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31002, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:23.403-0500 [rsHealthPoll] DBClientCursor::init call() failed m31001| 2013-12-18T15:17:23.404-0500 [rsHealthPoll] replset info specter.local:31000 heartbeat failed, retrying m31001| 2013-12-18T15:17:23.404-0500 [rsHealthPoll] replset info specter.local:31002 just heartbeated us, but our heartbeat failed: , not changing state m31001| 2013-12-18T15:17:23.405-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31000, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:23.405-0500 [rsHealthPoll] replset info specter.local:31000 just heartbeated us, but our heartbeat failed: , not changing state m31001| 2013-12-18T15:17:25.407-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31002, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:25.407-0500 [rsHealthPoll] replset info specter.local:31002 heartbeat failed, retrying m31001| 2013-12-18T15:17:25.410-0500 [rsHealthPoll] warning: Failed to connect to 
10.4.101.171:31000, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:25.410-0500 [rsHealthPoll] replset info specter.local:31000 heartbeat failed, retrying m31001| 2013-12-18T15:17:25.412-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31002, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:25.412-0500 [rsHealthPoll] replSet info specter.local:31002 is down (or slow to respond): m31001| 2013-12-18T15:17:25.412-0500 [rsHealthPoll] replSet member specter.local:31002 is now in state DOWN m31001| 2013-12-18T15:17:25.415-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31000, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:25.415-0500 [rsHealthPoll] replset info specter.local:31000 just heartbeated us, but our heartbeat failed: , not changing state m31001| 2013-12-18T15:17:27.422-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31002, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:27.422-0500 [rsHealthPoll] replset info specter.local:31002 heartbeat failed, retrying m31001| 2013-12-18T15:17:27.424-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31000, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:27.424-0500 [rsHealthPoll] replset info specter.local:31000 heartbeat failed, retrying m31001| 2013-12-18T15:17:27.440-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31002, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:27.441-0500 [rsHealthPoll] warning: Failed to connect to 10.4.101.171:31000, reason: errno:61 Connection refused m31001| 2013-12-18T15:17:27.441-0500 [rsHealthPoll] replSet info specter.local:31000 is down (or slow to respond): m31001| 2013-12-18T15:17:27.441-0500 [rsHealthPoll] replSet member specter.local:31000 is now in state DOWN m31001| 2013-12-18T15:17:27.441-0500 [rsMgr] can't see a majority of the set, relinquishing primary m31001| 2013-12-18T15:17:27.441-0500 [rsMgr] replSet relinquishing primary state 
m31001| 2013-12-18T15:17:27.441-0500 [rsMgr] replSet SECONDARY m31001| 2013-12-18T15:17:27.441-0500 [rsMgr] replSet closing client sockets after relinquishing primary 2013-12-18T15:17:27.442-0500 DBClientCursor::init call() failed m31001| 2013-12-18T15:17:27.442-0500 [conn2] command admin.$cmd command: { getLastError: 1, w: "majority", wtimeout: 30000.0 } ntoreturn:1 keyUpdates:0 reslen:150 6112ms m31001| 2013-12-18T15:17:27.442-0500 [conn2] command test.$cmd command: { createRole: "developer", roles: [ { role: "read", db: "test" } ], privileges: [], writeConcern: { w: "majority", wtimeout: 30000.0 } } keyUpdates:0 locks(micros) r:260 reslen:106 6114ms m31001| 2013-12-18T15:17:27.442-0500 [conn2] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:61318] 2013-12-18T15:17:27.442-0500 Error: error doing query: failed at src/mongo/shell/collection.js:55 failed to load: repl_upgrade_24_secondaries.js m31001| 2013-12-18T15:17:27.442-0500 [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends m31001| 2013-12-18T15:17:27.442-0500 [signalProcessingThread] now exiting m31001| dbexit: 2013-12-18T15:17:27.442-0500 [signalProcessingThread] shutdown: going to close listening sockets... m31001| 2013-12-18T15:17:27.442-0500 [signalProcessingThread] closing listening socket: 22 m31001| 2013-12-18T15:17:27.442-0500 [signalProcessingThread] closing listening socket: 23 m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] closing listening socket: 26 m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] removing socket file: /tmp/mongodb-31001.sock m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] shutdown: going to flush diaglog... m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] shutdown: going to close sockets... m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] shutdown: waiting for fs preallocator... 
m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] shutdown: lock for final commit... m31001| 2013-12-18T15:17:27.443-0500 [signalProcessingThread] shutdown: final commit... m31001| 2013-12-18T15:17:27.469-0500 [signalProcessingThread] shutdown: closing all files... m31001| 2013-12-18T15:17:27.469-0500 [signalProcessingThread] closeAllFiles() finished m31001| 2013-12-18T15:17:27.469-0500 [signalProcessingThread] journalCleanup... m31001| 2013-12-18T15:17:27.469-0500 [signalProcessingThread] removeJournalFiles m31001| 2013-12-18T15:17:27.470-0500 [signalProcessingThread] shutdown: removing fs lock... m31001| dbexit: really exiting now