2016-01-28T11:16:10.705-0500 I CONTROL [initandlisten] MongoDB starting : pid=28848 port=50016 dbpath=/tmp/mms-automation/test/output/data/process9008.50016 64-bit host=neurofunk.local
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten] db version v3.2.1-95-g4a3c6e6
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten] git version: 4a3c6e62882269432e8df8c19675bde716f38d50
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten] allocator: system
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten] modules: none
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten] build environment:
2016-01-28T11:16:10.709-0500 I CONTROL [initandlisten]     distarch: x86_64
2016-01-28T11:16:10.710-0500 I CONTROL [initandlisten]     target_arch: x86_64
2016-01-28T11:16:10.710-0500 I CONTROL [initandlisten] options: { config: "/tmp/mms-automation/test/output/data/process9008.50016/automation-mongod.conf", net: { port: 50016 }, processManagement: { fork: true }, replication: { oplogSizeMB: 64, replSetName: "csrs" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/tmp/mms-automation/test/output/data/process9008.50016", journal: { enabled: false }, mmapv1: { preallocDataFiles: true, smallFiles: true } }, systemLog: { destination: "file", path: "/tmp/mms-automation/test/logs/run9008" } }
2016-01-28T11:16:10.713-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2016-01-28T11:16:10.963-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2016-01-28T11:16:11.093-0500 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-01-28T11:16:11.094-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-01-28T11:16:11.094-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-01-28T11:16:11.094-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/tmp/mms-automation/test/output/data/process9008.50016/diagnostic.data'
2016-01-28T11:16:11.224-0500 I NETWORK [initandlisten] waiting for connections on port 50016
2016-01-28T11:16:11.279-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50024 #1 (1 connection now open)
2016-01-28T11:16:11.371-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50034 #2 (2 connections now open)
2016-01-28T11:16:25.971-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50270 #3 (3 connections now open)
2016-01-28T11:16:26.376-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50272 #4 (4 connections now open)
2016-01-28T11:18:43.372-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52232 #5 (5 connections now open)
2016-01-28T11:18:43.373-0500 I NETWORK [conn5] end connection 127.0.0.1:52232 (4 connections now open)
2016-01-28T11:18:43.377-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52234 #6 (5 connections now open)
2016-01-28T11:18:43.380-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:43.383-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52241 #7 (6 connections now open)
2016-01-28T11:18:43.384-0500 I NETWORK [conn7] end connection 127.0.0.1:52241 (5 connections now open)
2016-01-28T11:18:43.444-0500 I REPL [replExecDBWorker-0] Starting replication applier threads
2016-01-28T11:18:43.445-0500 W REPL [rsSync] did not receive a valid config yet
2016-01-28T11:18:43.445-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "csrs", version: 2, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "cfg-9007-alias.lvh.me:9007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "cfg-9008-alias.lvh.me:50016", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 0 }, { _id: 2, host: "cfg-9009-alias.lvh.me:50015", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 0 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-01-28T11:18:43.445-0500 I REPL [ReplicationExecutor] This node is cfg-9008-alias.lvh.me:50016 in the config
2016-01-28T11:18:43.445-0500 I REPL [ReplicationExecutor] transition to STARTUP2
2016-01-28T11:18:43.446-0500 I REPL [ReplicationExecutor] Member cfg-9007-alias.lvh.me:9007 is now in state PRIMARY
2016-01-28T11:18:43.448-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9009-alias.lvh.me:50015
2016-01-28T11:18:43.448-0500 I REPL [ReplicationExecutor] Member cfg-9009-alias.lvh.me:50015 is now in state STARTUP
2016-01-28T11:18:43.450-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52243 #8 (6 connections now open)
2016-01-28T11:18:43.451-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52244 #9 (7 connections now open)
2016-01-28T11:18:44.448-0500 I REPL [rsSync] ******
2016-01-28T11:18:44.448-0500 I REPL [rsSync] creating replication oplog of size: 64MB...
2016-01-28T11:18:44.479-0500 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
2016-01-28T11:18:44.479-0500 I STORAGE [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2016-01-28T11:18:44.480-0500 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for truncation
2016-01-28T11:18:44.576-0500 I REPL [rsSync] ******
2016-01-28T11:18:44.576-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:44.637-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:45.641-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:45.641-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:46.645-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:46.645-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:47.650-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:47.650-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:48.451-0500 I REPL [ReplicationExecutor] Member cfg-9009-alias.lvh.me:50015 is now in state STARTUP2
2016-01-28T11:18:48.654-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:48.654-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:48.744-0500 I REPL [rsSync] initial sync drop all databases
2016-01-28T11:18:48.744-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
2016-01-28T11:18:48.744-0500 I REPL [rsSync] initial sync clone all databases
2016-01-28T11:18:48.745-0500 I REPL [rsSync] initial sync cloning db: config
2016-01-28T11:18:48.805-0500 I INDEX [rsSync] build index on: config.shards properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2016-01-28T11:18:48.805-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.812-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:48.877-0500 I INDEX [rsSync] build index on: config.actionlog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.actionlog" }
2016-01-28T11:18:48.877-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.885-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:48.957-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2016-01-28T11:18:48.958-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.967-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.032-0500 I INDEX [rsSync] build index on: config.mongos properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }
2016-01-28T11:18:49.032-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.039-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.102-0500 I INDEX [rsSync] build index on: config.collections properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.collections" }
2016-01-28T11:18:49.103-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.110-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.171-0500 I INDEX [rsSync] build index on: config.lockpings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }
2016-01-28T11:18:49.171-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.178-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.238-0500 I INDEX [rsSync] build index on: config.settings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.settings" }
2016-01-28T11:18:49.238-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.245-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.308-0500 I INDEX [rsSync] build index on: config.version properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.version" }
2016-01-28T11:18:49.308-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.317-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.380-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
2016-01-28T11:18:49.380-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.388-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.449-0500 I INDEX [rsSync] build index on: config.databases properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.databases" }
2016-01-28T11:18:49.449-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.457-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.519-0500 I INDEX [rsSync] build index on: config.tags properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.tags" }
2016-01-28T11:18:49.519-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.526-0500 I INDEX [rsSync] build index done. scanned 0 total records. 0 secs
2016-01-28T11:18:49.590-0500 I INDEX [rsSync] build index on: config.changelog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }
2016-01-28T11:18:49.591-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.598-0500 I INDEX [rsSync] build index done. scanned 17 total records. 0 secs
2016-01-28T11:18:49.599-0500 I REPL [rsSync] initial sync data copy, starting syncup
2016-01-28T11:18:49.599-0500 I REPL [rsSync] oplog sync 1 of 3
2016-01-28T11:18:49.600-0500 I REPL [rsSync] oplog sync 2 of 3
2016-01-28T11:18:49.600-0500 I REPL [rsSync] initial sync building indexes
2016-01-28T11:18:49.601-0500 I REPL [rsSync] initial sync cloning indexes for : config
2016-01-28T11:18:49.603-0500 I STORAGE [rsSync] copying indexes for: { name: "shards", options: {} }
2016-01-28T11:18:49.635-0500 I INDEX [rsSync] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2016-01-28T11:18:49.636-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.643-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.643-0500 I STORAGE [rsSync] copying indexes for: { name: "actionlog", options: { capped: true, size: 2097152 } }
2016-01-28T11:18:49.644-0500 I STORAGE [rsSync] copying indexes for: { name: "chunks", options: {} }
2016-01-28T11:18:49.675-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2016-01-28T11:18:49.675-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.707-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2016-01-28T11:18:49.708-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.740-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2016-01-28T11:18:49.740-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.762-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.763-0500 I STORAGE [rsSync] copying indexes for: { name: "mongos", options: {} }
2016-01-28T11:18:49.763-0500 I STORAGE [rsSync] copying indexes for: { name: "collections", options: {} }
2016-01-28T11:18:49.764-0500 I STORAGE [rsSync] copying indexes for: { name: "lockpings", options: {} }
2016-01-28T11:18:49.791-0500 I INDEX [rsSync] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
2016-01-28T11:18:49.792-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.800-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.801-0500 I STORAGE [rsSync] copying indexes for: { name: "settings", options: {} }
2016-01-28T11:18:49.801-0500 I STORAGE [rsSync] copying indexes for: { name: "version", options: {} }
2016-01-28T11:18:49.801-0500 I STORAGE [rsSync] copying indexes for: { name: "locks", options: {} }
2016-01-28T11:18:49.830-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
2016-01-28T11:18:49.830-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.861-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
2016-01-28T11:18:49.862-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.877-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.877-0500 I STORAGE [rsSync] copying indexes for: { name: "databases", options: {} }
2016-01-28T11:18:49.878-0500 I STORAGE [rsSync] copying indexes for: { name: "tags", options: {} }
2016-01-28T11:18:49.898-0500 I INDEX [rsSync] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
2016-01-28T11:18:49.899-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.906-0500 I INDEX [rsSync] build index done. scanned 0 total records. 0 secs
2016-01-28T11:18:49.907-0500 I STORAGE [rsSync] copying indexes for: { name: "changelog", options: { capped: true, size: 10485760 } }
2016-01-28T11:18:49.910-0500 I REPL [rsSync] oplog sync 3 of 3
2016-01-28T11:18:49.911-0500 I REPL [rsSync] initial sync finishing up
2016-01-28T11:18:49.911-0500 I REPL [rsSync] set minValid=(term: 1, timestamp: Jan 28 11:18:45:1)
2016-01-28T11:18:50.464-0500 I REPL [ReplicationExecutor] could not find member to sync from
2016-01-28T11:18:50.465-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:50.485-0500 I REPL [rsSync] initial sync done
2016-01-28T11:18:50.486-0500 I REPL [ReplicationExecutor] transition to RECOVERING
2016-01-28T11:18:50.487-0500 I REPL [ReplicationExecutor] transition to SECONDARY
2016-01-28T11:18:53.523-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52464 #10 (8 connections now open)
2016-01-28T11:18:53.523-0500 I NETWORK [conn10] end connection 127.0.0.1:52464 (7 connections now open)
2016-01-28T11:18:53.529-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "csrs", version: 3, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "cfg-9007-alias.lvh.me:9007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "cfg-9008-alias.lvh.me:50016", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "cfg-9009-alias.lvh.me:50015", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-01-28T11:18:53.530-0500 I REPL [ReplicationExecutor] This node is cfg-9008-alias.lvh.me:50016 in the config
2016-01-28T11:18:53.530-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52470 #11 (8 connections now open)
2016-01-28T11:18:53.530-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:53.531-0500 I NETWORK [conn11] end connection 127.0.0.1:52470 (7 connections now open)
2016-01-28T11:18:53.531-0500 I REPL [ReplicationExecutor] Member cfg-9009-alias.lvh.me:50015 is now in state SECONDARY
2016-01-28T11:18:53.531-0500 I NETWORK [conn9] end connection 127.0.0.1:52244 (6 connections now open)
2016-01-28T11:18:54.472-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:55.573-0500 I NETWORK [rsBackgroundSync] Socket recv() errno:54 Connection reset by peer 127.0.0.1:9007
2016-01-28T11:18:55.574-0500 I NETWORK [rsBackgroundSync] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:9007]
2016-01-28T11:18:55.574-0500 E REPL [rsBackgroundSync] network error while attempting to run command 'isMaster' on host 'cfg-9007-alias.lvh.me:9007'
2016-01-28T11:18:55.575-0500 I REPL [ReplicationExecutor] could not find member to sync from
2016-01-28T11:18:55.576-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:55.576-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:55.577-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
2016-01-28T11:18:55.581-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.584-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.589-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.787-0500 I NETWORK [conn6] end connection 127.0.0.1:52234 (5 connections now open)
2016-01-28T11:18:56.215-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52649 #12 (6 connections now open)
2016-01-28T11:18:56.217-0500 I NETWORK [conn12] end connection 127.0.0.1:52649 (5 connections now open)
2016-01-28T11:18:56.224-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52651 #13 (6 connections now open)
2016-01-28T11:18:56.380-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52661 #14 (7 connections now open)
2016-01-28T11:18:56.394-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52666 #15 (8 connections now open)
2016-01-28T11:18:56.396-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52668 #16 (9 connections now open)
2016-01-28T11:18:56.398-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52670 #17 (10 connections now open)
2016-01-28T11:18:57.641-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52707 #18 (11 connections now open)
2016-01-28T11:18:57.642-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52708 #19 (12 connections now open)
2016-01-28T11:18:57.957-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52713 #20 (13 connections now open)
2016-01-28T11:18:58.188-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52722 #21 (14 connections now open)
2016-01-28T11:18:58.201-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52723 #22 (15 connections now open)
2016-01-28T11:18:59.718-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52801 #23 (16 connections now open)
2016-01-28T11:19:00.595-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:00.596-0500 I REPL [ReplicationExecutor] Member cfg-9007-alias.lvh.me:9007 is now in state SECONDARY
2016-01-28T11:19:01.641-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52916 #24 (17 connections now open)
2016-01-28T11:19:01.908-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52933 #25 (18 connections now open)
2016-01-28T11:19:02.055-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52938 #26 (19 connections now open)
2016-01-28T11:19:02.056-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52940 #27 (20 connections now open)
2016-01-28T11:19:02.327-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52971 #28 (21 connections now open)
2016-01-28T11:19:02.329-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52972 #29 (22 connections now open)
2016-01-28T11:19:02.355-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52975 #30 (23 connections now open)
2016-01-28T11:19:04.855-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
2016-01-28T11:19:04.855-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected
2016-01-28T11:19:04.855-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9007-alias.lvh.me:9007 because: candidate's data is staler than mine, resp:{ term: 1, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 }
2016-01-28T11:19:04.912-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election
2016-01-28T11:19:04.913-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9007-alias.lvh.me:9007 because: candidate's data is staler than mine, resp:{ term: 2, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 }
2016-01-28T11:19:04.914-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9009-alias.lvh.me:50015 because: already voted for another candidate this term, resp:{ term: 2, voteGranted: false, reason: "already voted for another candidate this term", ok: 1.0 }
2016-01-28T11:19:04.914-0500 I REPL [ReplicationExecutor] not becoming primary, we received insufficient votes
2016-01-28T11:19:05.605-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.606-0500 I REPL [SyncSourceFeedback] setting syncSourceFeedback to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.609-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.611-0500 I - [rsBackgroundSync-0] Invariant failure bob src/mongo/db/repl/bgsync.cpp 639
2016-01-28T11:19:05.611-0500 I - [rsBackgroundSync-0] ***aborting after invariant() failure
2016-01-28T11:19:05.623-0500 F - [rsBackgroundSync-0] warning: log line attempted (10k) over max size (10k), printing beginning and end ... Got signal: 6 (Abort trap: 6).
0x105f7b089 0x105f7aa10 0x7fff9bf72eaa 0x7fff88dcfa36 0x7fff8c2f36e7 0x105f1fed9 0x105b46625 0x105b48ac5 0x105720ca9 0x105d982fa 0x105d9715d 0x105d97493 0x105f29d6c 0x105f2a99b 0x105f2a54d 0x105f2b4c8 0x7fff9dc33c13 0x7fff9dc33b90 0x7fff9dc31375
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"10569D000","o":"8DE089"},{"b":"10569D000","o":"8DDA10"},{"b":"7FFF9BF6E000","o":"4EAA"},{"b":"7FFF88DCD000","o":"2A36"},{"b":"7FFF8C295000","o":"5E6E7"},{"b":"10569D000","o":"882ED9"},{"b":"10569D000","o":"4A9625"},{"b":"10569D000","o":"4ABAC5"},{"b":"10569D000","o":"83CA9"},{"b":"10569D000","o":"6FB2FA"},{"b":"10569D000","o":"6FA15D"},{"b":"10569D000","o":"6FA493"},{"b":"10569D000","o":"88CD6C"},{"b":"10569D000","o":"88D99B"},{"b":"10569D000","o":"88D54D"},{"b":"10569D000","o":"88E4C8"},{"b":"7FFF9DC30000","o":"3C13"},{"b":"7FFF9DC30000","o":"3B90"},{"b":"7FFF9DC30000","o":"1375"}],"processInfo":{ "mongodbVersion" : "3.2.1-95-g4a3c6e6", "gitVersion" : "4a3c6e62882269432e8df8c19675bde716f38d50", "compiledModules" : [], "uname" : { "sysname" : "Darwin", "release" : "15.3.0", "version" : "Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64", "machine" : "x86_64" }, "somap" : [ { "path" : "/tmp/mms-automation/test/versions/mongodb-osx-x86_64-3.2.1-95-g4a3c6e6/bin/mongod", "machType" : 2, "b" : "10569D000", "vmaddr" : "100000000", "buildId" : "1F7B2B9179413D908968E4C4D5BEDDCF" }, { "path" : "/usr/lib/libSystem.B.dylib", "machType" : 6, "b" : "7FFF9BDE0000", "vmaddr" : "7FFF93B5F000", "buildId" : "5A4257EF31453BB387A40D2404A9462D" }, { "path" : "/usr/lib/libc++.1.dylib", "machType" : 6, "b" : "7FFF9AB72000", "vmaddr" : "7FFF928F1000", "buildId" : "8FC3D139805534989AC56467CB7F4D14" }, { "path" : "/usr/lib/system/libcache.dylib", "machType" : 6, "b" : "7FFF8A07E000", "vmaddr" : "7FFF81DFD000", "buildId" : "6B245C0AF3EA383BA5425B0D0456A41B" }, { "path" : "/usr/lib/system/libcommonCrypto.dylib", "machType" : 6, "b" : "7FFF971FA000", "vmaddr" : "7FFF8EF79000", "buildId" : "766BC3F541F33315BABC72718A98EA92" }, { "path" : "/usr/lib/system/libcompiler_rt.dylib", "machType" : 6, "b" : "7FFF958D3000", "vmaddr" : "7FFF8D652000", "buildId" : "D3C4AB4023B43BC68C385B8758D14E80" }, { "path" : "/usr/lib/system/libcopyfile.dylib", "machType" : 6, "b" : "7FFF915F9000", "vmaddr" : "7FFF89378000", "buildId" : "F51332690B22388CA57C079667B6291E" }, { "path" : "/usr/lib/system/libcorecrypto.dylib", "machType" : 6, "b" : "7FFF97739000", "vmaddr" : "7FFF8F4B8000", "buildId" : "C6BD205F4ECE37EEBCABA76F39CDCFFA" }, { "path" : "/usr/lib/system/libdispatch.dylib", "machType" : 6, "b" : "7FFF973FC000", "vmaddr" : "7FFF8F17B000", "buildId" : "324C91892AF33356847F6F4CE1C6E901" }, { "path" : "/usr/lib/system/libdyld.dylib", "machType" : 6, "b" : "7FFF8EB35000", "vmaddr" : "7FFF868B4000", "buildId" : "AA629043C6F632FE8007E3478E99ACA7" }, { "path" : "/usr/lib/system/libkeymgr.dylib", "machType" : 6, "b" : "7FFF8E646000", "vmaddr" : "7FFF863C5000", "buildId" : "09397E0160663179A50C2CE666FDA929" }, { "path" : "/usr/lib/system/liblaunch.dylib", "machType" : 6, "b" : "7FFF8D80C000", "vmaddr" : "7FFF8558B000", "buildId" : "EDF719D6D2BB38DD8C944272BEFDA2CD" }, { "path" : "/usr/lib/system/libmacho.dylib", "machType" : 6, "b" : "7FFF96171000", "vmaddr" : "7FFF8DEF0000", "buildId" : "CB745E1F48853F96B38B2093DF488FD5" }, { "path" : "/us .......... em/libxpc.dylib", "machType" : 6, "b" : "7FFF9B538000", "vmaddr" : "7FFF932B7000", "buildId" : "61AB46109304354C9E9BD57198AE9866" }, { "path" : "/usr/lib/libobjc.A.dylib", "machType" : 6, "b" : "7FFF9A6A7000", "vmaddr" : "7FFF92426000", "buildId" : "9F45830DF1D53CDF94611A5477ED7D1E" }, { "path" : "/usr/lib/libauto.dylib", "machType" : 6, "b" : "7FFF9D57F000", "vmaddr" : "7FFF952FE000", "buildId" : "999E610F41FC32A3ADCA5EC049B65DFB" }, { "path" : "/usr/lib/libc++abi.dylib", "machType" : 6, "b" : "7FFF8A6FF000", "vmaddr" : "7FFF8247E000", "buildId" : "DCCC81773D0935BC97842A04FEC4C71B" }, { "path" : "/usr/lib/libDiagnosticMessagesClient.dylib", "machType" : 6, "b" : "7FFF96D06000", "vmaddr" : "7FFF8EA85000", "buildId" : "4243B6B421E9355B9C5A95A216233B96" } ] }}
 mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x39) [0x105f7b089]
 mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0x90) [0x105f7aa10]
 libsystem_platform.dylib(_sigtramp+0x1A) [0x7fff9bf72eaa]
 libsystem_malloc.dylib(szone_malloc_should_clear+0x445) [0x7fff88dcfa36]
 libsystem_c.dylib(abort+0x81) [0x7fff8c2f36e7]
 mongod(_ZN5mongo15invariantFailedEPKcS1_j+0x2E9) [0x105f1fed9]
 mongod(_ZN5mongo4repl14BackgroundSync16_fetcherCallbackERKNS_10StatusWithINS_7Fetcher13QueryResponseEEEPNS_14BSONObjBuilderERKNS_11HostAndPortENS0_6OpTimeExNSt3__16chrono8durationIxNSE_5ratioILl1ELl1000EEEEEPNS_6StatusE+0x1E75) [0x105b46625]
 mongod(_ZNSt3__110__function6__funcINS_6__bindIMN5mongo4repl14BackgroundSyncEFvRKNS3_10StatusWithINS3_7Fetcher13QueryResponseEEEPNS3_14BSONObjBuilderERKNS3_11HostAndPortENS4_6OpTimeExNS_6chrono8durationIxNS_5ratioILl1ELl1000EEEEEPNS3_6StatusEEJPS5_RNS_12placeholders4__phILi1EEERNST_ILi3EEENS_17reference_wrapperISF_EERSH_RxRKSM_SO_EEENS_9allocatorIS14_EEFvSB_PNS7_10NextActionESD_EEclESB_OS18_OSD_+0x55) [0x105b48ac5]
 mongod(_ZN5mongo7Fetcher9_callbackERKNS_8executor12TaskExecutor25RemoteCommandCallbackArgsEPKc+0x2779) [0x105720ca9]
 mongod(_ZNSt3__110__function6__funcIZZN5mongo8executor22ThreadPoolTaskExecutor21scheduleRemoteCommandERKNS3_20RemoteCommandRequestERKNS_8functionIFvRKNS3_12TaskExecutor25RemoteCommandCallbackArgsEEEEENK3$_2clERKNS2_10StatusWithINS3_21RemoteCommandResponseEEEEUlRKNS9_12CallbackArgsEE_NS_9allocatorISQ_EEFvSP_EEclESP_+0x15A) [0x105d982fa]
 mongod(_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackENSt3__110shared_ptrINS1_13CallbackStateEEE+0x13D) [0x105d9715d]
 mongod(_ZNSt3__110__function6__funcIZN5mongo8executor22ThreadPoolTaskExecutor23scheduleIntoPool_inlockEPNS_4listINS_10shared_ptrINS4_13CallbackStateEEENS_9allocatorIS8_EEEERKNS_15__list_iteratorIS8_PvEESH_NS_11unique_lockINS_5mutexEEEE3$_4NS9_ISL_EEFvvEEclEv+0x33) [0x105d97493]
 mongod(_ZN5mongo10ThreadPool10_doOneTaskEPNSt3__111unique_lockINS1_5mutexEEE+0x25C) [0x105f29d6c]
 mongod(_ZN5mongo10ThreadPool13_consumeTasksEv+0x1FB) [0x105f2a99b]
 mongod(_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE+0x10D) [0x105f2a54d]
 mongod(_ZNSt3__114__thread_proxyINS_5tupleIJNS_6__bindIPFvPN5mongo10ThreadPoolERKNS_12basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEEEEJS5_SD_EEEEEEEEPvSI_+0x68) [0x105f2b4c8]
 libsystem_pthread.dylib(_pthread_body+0x83) [0x7fff9dc33c13]
 libsystem_pthread.dylib(_pthread_body+0x0) [0x7fff9dc33b90]
 libsystem_pthread.dylib(thread_start+0xD) [0x7fff9dc31375]
-----  END BACKTRACE  -----