2016-01-28T11:16:10.570-0500 I CONTROL [initandlisten] MongoDB starting : pid=28839 port=50015 dbpath=/tmp/mms-automation/test/output/data/process9009.50015 64-bit host=neurofunk.local
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten] db version v3.2.1-95-g4a3c6e6
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten] git version: 4a3c6e62882269432e8df8c19675bde716f38d50
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten] allocator: system
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten] modules: none
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten] build environment:
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten]     distarch: x86_64
2016-01-28T11:16:10.572-0500 I CONTROL [initandlisten]     target_arch: x86_64
2016-01-28T11:16:10.573-0500 I CONTROL [initandlisten] options: { config: "/tmp/mms-automation/test/output/data/process9009.50015/automation-mongod.conf", net: { port: 50015 }, processManagement: { fork: true }, replication: { oplogSizeMB: 64, replSetName: "csrs" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/tmp/mms-automation/test/output/data/process9009.50015", journal: { enabled: false }, mmapv1: { preallocDataFiles: true, smallFiles: true } }, systemLog: { destination: "file", path: "/tmp/mms-automation/test/logs/run9009" } }
2016-01-28T11:16:10.573-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),,log=(enabled=false),
2016-01-28T11:16:10.765-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2016-01-28T11:16:10.897-0500 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument: Did not find replica set lastVote document in local.replset.election
2016-01-28T11:16:10.898-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2016-01-28T11:16:10.898-0500 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/tmp/mms-automation/test/output/data/process9009.50015/diagnostic.data'
2016-01-28T11:16:10.898-0500 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-01-28T11:16:11.022-0500 I NETWORK [initandlisten] waiting for connections on port 50015
2016-01-28T11:16:11.076-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50023 #1 (1 connection now open)
2016-01-28T11:16:11.140-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50026 #2 (2 connections now open)
2016-01-28T11:16:25.772-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50265 #3 (3 connections now open)
2016-01-28T11:16:26.142-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50271 #4 (4 connections now open)
2016-01-28T11:18:43.373-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52233 #5 (5 connections now open)
2016-01-28T11:18:43.374-0500 I NETWORK [conn5] end connection 127.0.0.1:52233 (4 connections now open)
2016-01-28T11:18:43.378-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52235 #6 (5 connections now open)
2016-01-28T11:18:43.381-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:43.383-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52240 #7 (6 connections now open)
2016-01-28T11:18:43.383-0500 I NETWORK [conn7] end connection 127.0.0.1:52240 (5 connections now open)
2016-01-28T11:18:43.447-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52242 #8 (6 connections now open)
2016-01-28T11:18:43.448-0500 I REPL [replExecDBWorker-0] Starting replication applier threads
2016-01-28T11:18:43.449-0500 W REPL [rsSync] did not receive a valid config yet
2016-01-28T11:18:43.449-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "csrs", version: 2, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "cfg-9007-alias.lvh.me:9007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "cfg-9008-alias.lvh.me:50016", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 0 }, { _id: 2, host: "cfg-9009-alias.lvh.me:50015", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 0 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-01-28T11:18:43.449-0500 I REPL [ReplicationExecutor] This node is cfg-9009-alias.lvh.me:50015 in the config
2016-01-28T11:18:43.449-0500 I REPL [ReplicationExecutor] transition to STARTUP2
2016-01-28T11:18:43.450-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9008-alias.lvh.me:50016
2016-01-28T11:18:43.450-0500 I REPL [ReplicationExecutor] Member cfg-9007-alias.lvh.me:9007 is now in state PRIMARY
2016-01-28T11:18:43.451-0500 I REPL [ReplicationExecutor] Member cfg-9008-alias.lvh.me:50016 is now in state STARTUP2
2016-01-28T11:18:43.452-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9008-alias.lvh.me:50016
2016-01-28T11:18:44.452-0500 I REPL [rsSync] ******
2016-01-28T11:18:44.452-0500 I REPL [rsSync] creating replication oplog of size: 64MB...
2016-01-28T11:18:44.483-0500 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
2016-01-28T11:18:44.483-0500 I STORAGE [rsSync] The size storer reports that the oplog contains 0 records totaling to 0 bytes
2016-01-28T11:18:44.484-0500 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for truncation
2016-01-28T11:18:44.565-0500 I REPL [rsSync] ******
2016-01-28T11:18:44.565-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:44.627-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:45.630-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:45.631-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:46.634-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:46.634-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:47.638-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:47.639-0500 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2016-01-28T11:18:48.642-0500 I REPL [rsSync] initial sync pending
2016-01-28T11:18:48.643-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:48.753-0500 I REPL [rsSync] initial sync drop all databases
2016-01-28T11:18:48.753-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
2016-01-28T11:18:48.753-0500 I REPL [rsSync] initial sync clone all databases
2016-01-28T11:18:48.754-0500 I REPL [rsSync] initial sync cloning db: config
2016-01-28T11:18:48.816-0500 I INDEX [rsSync] build index on: config.shards properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.shards" }
2016-01-28T11:18:48.816-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.824-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:48.890-0500 I INDEX [rsSync] build index on: config.actionlog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.actionlog" }
2016-01-28T11:18:48.891-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.899-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:48.971-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.chunks" }
2016-01-28T11:18:48.972-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:48.980-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.043-0500 I INDEX [rsSync] build index on: config.mongos properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.mongos" }
2016-01-28T11:18:49.044-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.052-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.114-0500 I INDEX [rsSync] build index on: config.collections properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.collections" }
2016-01-28T11:18:49.114-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.122-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.181-0500 I INDEX [rsSync] build index on: config.lockpings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" }
2016-01-28T11:18:49.181-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.189-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.249-0500 I INDEX [rsSync] build index on: config.settings properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.settings" }
2016-01-28T11:18:49.249-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.257-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.321-0500 I INDEX [rsSync] build index on: config.version properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.version" }
2016-01-28T11:18:49.321-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.329-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.391-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.locks" }
2016-01-28T11:18:49.391-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.399-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.460-0500 I INDEX [rsSync] build index on: config.databases properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.databases" }
2016-01-28T11:18:49.460-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.469-0500 I INDEX [rsSync] build index done. scanned 1 total records. 0 secs
2016-01-28T11:18:49.530-0500 I INDEX [rsSync] build index on: config.tags properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.tags" }
2016-01-28T11:18:49.530-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.538-0500 I INDEX [rsSync] build index done. scanned 0 total records. 0 secs
2016-01-28T11:18:49.602-0500 I INDEX [rsSync] build index on: config.changelog properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "config.changelog" }
2016-01-28T11:18:49.603-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.609-0500 I INDEX [rsSync] build index done. scanned 17 total records. 0 secs
2016-01-28T11:18:49.609-0500 I REPL [rsSync] initial sync data copy, starting syncup
2016-01-28T11:18:49.609-0500 I REPL [rsSync] oplog sync 1 of 3
2016-01-28T11:18:49.610-0500 I REPL [rsSync] oplog sync 2 of 3
2016-01-28T11:18:49.610-0500 I REPL [rsSync] initial sync building indexes
2016-01-28T11:18:49.610-0500 I REPL [rsSync] initial sync cloning indexes for : config
2016-01-28T11:18:49.612-0500 I STORAGE [rsSync] copying indexes for: { name: "shards", options: {} }
2016-01-28T11:18:49.646-0500 I INDEX [rsSync] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" }
2016-01-28T11:18:49.647-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.654-0500 I INDEX [rsSync] build index done. scanned 2 total records. 0 secs
2016-01-28T11:18:49.654-0500 I STORAGE [rsSync] copying indexes for: { name: "actionlog", options: { capped: true, size: 2097152 } }
2016-01-28T11:18:49.655-0500 I STORAGE [rsSync] copying indexes for: { name: "chunks", options: {} }
2016-01-28T11:18:49.688-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" }
2016-01-28T11:18:49.688-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.720-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" }
2016-01-28T11:18:49.720-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.752-0500 I INDEX [rsSync] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" }
2016-01-28T11:18:49.752-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.773-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.773-0500 I STORAGE [rsSync] copying indexes for: { name: "mongos", options: {} }
2016-01-28T11:18:49.774-0500 I STORAGE [rsSync] copying indexes for: { name: "collections", options: {} }
2016-01-28T11:18:49.774-0500 I STORAGE [rsSync] copying indexes for: { name: "lockpings", options: {} }
2016-01-28T11:18:49.804-0500 I INDEX [rsSync] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" }
2016-01-28T11:18:49.804-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.812-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.812-0500 I STORAGE [rsSync] copying indexes for: { name: "settings", options: {} }
2016-01-28T11:18:49.812-0500 I STORAGE [rsSync] copying indexes for: { name: "version", options: {} }
2016-01-28T11:18:49.812-0500 I STORAGE [rsSync] copying indexes for: { name: "locks", options: {} }
2016-01-28T11:18:49.842-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" }
2016-01-28T11:18:49.842-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.873-0500 I INDEX [rsSync] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" }
2016-01-28T11:18:49.873-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.886-0500 I INDEX [rsSync] build index done. scanned 4 total records. 0 secs
2016-01-28T11:18:49.887-0500 I STORAGE [rsSync] copying indexes for: { name: "databases", options: {} }
2016-01-28T11:18:49.894-0500 I STORAGE [rsSync] copying indexes for: { name: "tags", options: {} }
2016-01-28T11:18:49.921-0500 I INDEX [rsSync] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" }
2016-01-28T11:18:49.922-0500 I INDEX [rsSync] building index using bulk method
2016-01-28T11:18:49.930-0500 I INDEX [rsSync] build index done. scanned 0 total records. 0 secs
2016-01-28T11:18:49.930-0500 I STORAGE [rsSync] copying indexes for: { name: "changelog", options: { capped: true, size: 10485760 } }
2016-01-28T11:18:49.930-0500 I REPL [rsSync] oplog sync 3 of 3
2016-01-28T11:18:49.931-0500 I REPL [rsSync] initial sync finishing up
2016-01-28T11:18:49.931-0500 I REPL [rsSync] set minValid=(term: 1, timestamp: Jan 28 11:18:45:1)
2016-01-28T11:18:50.472-0500 I REPL [ReplicationExecutor] could not find member to sync from
2016-01-28T11:18:50.472-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:50.523-0500 I REPL [rsSync] initial sync done
2016-01-28T11:18:50.527-0500 I REPL [ReplicationExecutor] transition to RECOVERING
2016-01-28T11:18:50.528-0500 I REPL [ReplicationExecutor] transition to SECONDARY
2016-01-28T11:18:53.524-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52465 #9 (7 connections now open)
2016-01-28T11:18:53.524-0500 I NETWORK [conn9] end connection 127.0.0.1:52465 (6 connections now open)
2016-01-28T11:18:53.525-0500 I NETWORK [conn6] end connection 127.0.0.1:52235 (5 connections now open)
2016-01-28T11:18:53.527-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52466 #10 (6 connections now open)
2016-01-28T11:18:53.529-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52468 #11 (7 connections now open)
2016-01-28T11:18:53.529-0500 I NETWORK [conn11] end connection 127.0.0.1:52468 (6 connections now open)
2016-01-28T11:18:53.531-0500 I REPL [ReplicationExecutor] New replica set config in use: { _id: "csrs", version: 3, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "cfg-9007-alias.lvh.me:9007", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "cfg-9008-alias.lvh.me:50016", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "cfg-9009-alias.lvh.me:50015", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2016-01-28T11:18:53.532-0500 I REPL [ReplicationExecutor] This node is cfg-9009-alias.lvh.me:50015 in the config
2016-01-28T11:18:53.532-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:53.533-0500 I REPL [ReplicationExecutor] Member cfg-9008-alias.lvh.me:50016 is now in state SECONDARY
2016-01-28T11:18:54.483-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:55.573-0500 I NETWORK [rsBackgroundSync] Socket recv() errno:54 Connection reset by peer 127.0.0.1:9007
2016-01-28T11:18:55.574-0500 I NETWORK [rsBackgroundSync] SocketException: remote: (NONE):0 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:9007]
2016-01-28T11:18:55.574-0500 E REPL [rsBackgroundSync] network error while attempting to run command 'isMaster' on host 'cfg-9007-alias.lvh.me:9007'
2016-01-28T11:18:55.575-0500 I REPL [ReplicationExecutor] could not find member to sync from
2016-01-28T11:18:55.576-0500 W REPL [ReplicationExecutor] The liveness timeout does not match callback handle, so not resetting it.
2016-01-28T11:18:55.576-0500 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to cfg-9007-alias.lvh.me:9007
2016-01-28T11:18:55.577-0500 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections
2016-01-28T11:18:55.581-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.584-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.589-0500 I REPL [ReplicationExecutor] Error in heartbeat request to cfg-9007-alias.lvh.me:9007; HostUnreachable: Connection refused
2016-01-28T11:18:55.787-0500 I NETWORK [conn10] end connection 127.0.0.1:52466 (5 connections now open)
2016-01-28T11:18:56.220-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52650 #12 (6 connections now open)
2016-01-28T11:18:56.221-0500 I NETWORK [conn12] end connection 127.0.0.1:52650 (5 connections now open)
2016-01-28T11:18:56.225-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52652 #13 (6 connections now open)
2016-01-28T11:18:56.380-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52660 #14 (7 connections now open)
2016-01-28T11:18:56.383-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52663 #15 (8 connections now open)
2016-01-28T11:18:56.392-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52664 #16 (9 connections now open)
2016-01-28T11:18:56.394-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52667 #17 (10 connections now open)
2016-01-28T11:18:57.640-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52705 #18 (11 connections now open)
2016-01-28T11:18:57.640-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52706 #19 (12 connections now open)
2016-01-28T11:18:57.958-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52715 #20 (13 connections now open)
2016-01-28T11:18:58.002-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52719 #21 (14 connections now open)
2016-01-28T11:18:59.672-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52798 #22 (15 connections now open)
2016-01-28T11:18:59.719-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52802 #23 (16 connections now open)
2016-01-28T11:18:59.949-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52805 #24 (17 connections now open)
2016-01-28T11:19:00.566-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52856 #25 (18 connections now open)
2016-01-28T11:19:00.595-0500 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:00.596-0500 I REPL [ReplicationExecutor] Member cfg-9007-alias.lvh.me:9007 is now in state SECONDARY
2016-01-28T11:19:04.841-0500 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms
2016-01-28T11:19:04.841-0500 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected
2016-01-28T11:19:04.846-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9007-alias.lvh.me:9007 because: candidate's data is staler than mine, resp:{ term: 1, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 }
2016-01-28T11:19:04.904-0500 I REPL [ReplicationExecutor] dry election run succeeded, running for election
2016-01-28T11:19:04.913-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9007-alias.lvh.me:9007 because: candidate's data is staler than mine, resp:{ term: 2, voteGranted: false, reason: "candidate's data is staler than mine", ok: 1.0 }
2016-01-28T11:19:04.913-0500 I REPL [ReplicationExecutor] VoteRequester: Got no vote from cfg-9008-alias.lvh.me:50016 because: already voted for another candidate this term, resp:{ term: 2, voteGranted: false, reason: "already voted for another candidate this term", ok: 1.0 }
2016-01-28T11:19:04.913-0500 I REPL [ReplicationExecutor] not becoming primary, we received insufficient votes
2016-01-28T11:19:05.605-0500 I REPL [ReplicationExecutor] syncing from: cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.607-0500 I REPL [SyncSourceFeedback] setting syncSourceFeedback to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.609-0500 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to cfg-9007-alias.lvh.me:9007
2016-01-28T11:19:05.611-0500 I - [rsBackgroundSync-0] Invariant failure bob src/mongo/db/repl/bgsync.cpp 639
2016-01-28T11:19:05.611-0500 I - [rsBackgroundSync-0] ***aborting after invariant() failure
2016-01-28T11:19:05.622-0500 F - [rsBackgroundSync-0] warning: log line attempted (10k) over max size (10k), printing beginning and end ... Got signal: 6 (Abort trap: 6).
 0x10236d089 0x10236ca10 0x7fff9bf72eaa 0x7fff88dcfa36 0x7fff8c2f36e7 0x102311ed9 0x101f38625 0x101f3aac5 0x101b12ca9 0x10218a2fa 0x10218915d 0x102189493 0x10231bd6c 0x10231c99b 0x10231c54d 0x10231d4c8 0x7fff9dc33c13 0x7fff9dc33b90 0x7fff9dc31375
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"101A8F000","o":"8DE089"},{"b":"101A8F000","o":"8DDA10"},{"b":"7FFF9BF6E000","o":"4EAA"},{"b":"7FFF88DCD000","o":"2A36"},{"b":"7FFF8C295000","o":"5E6E7"},{"b":"101A8F000","o":"882ED9"},{"b":"101A8F000","o":"4A9625"},{"b":"101A8F000","o":"4ABAC5"},{"b":"101A8F000","o":"83CA9"},{"b":"101A8F000","o":"6FB2FA"},{"b":"101A8F000","o":"6FA15D"},{"b":"101A8F000","o":"6FA493"},{"b":"101A8F000","o":"88CD6C"},{"b":"101A8F000","o":"88D99B"},{"b":"101A8F000","o":"88D54D"},{"b":"101A8F000","o":"88E4C8"},{"b":"7FFF9DC30000","o":"3C13"},{"b":"7FFF9DC30000","o":"3B90"},{"b":"7FFF9DC30000","o":"1375"}],"processInfo":{ "mongodbVersion" : "3.2.1-95-g4a3c6e6", "gitVersion" : "4a3c6e62882269432e8df8c19675bde716f38d50", "compiledModules" : [], "uname" : { "sysname" : "Darwin", "release" : "15.3.0", "version" : "Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64", "machine" : "x86_64" }, "somap" : [ { "path" : "/tmp/mms-automation/test/versions/mongodb-osx-x86_64-3.2.1-95-g4a3c6e6/bin/mongod", "machType" : 2, "b" : "101A8F000", "vmaddr" : "100000000", "buildId" : "1F7B2B9179413D908968E4C4D5BEDDCF" }, { "path" : "/usr/lib/libSystem.B.dylib", "machType" : 6, "b" : "7FFF9BDE0000", "vmaddr" : "7FFF93B5F000", "buildId" : "5A4257EF31453BB387A40D2404A9462D" }, { "path" : "/usr/lib/libc++.1.dylib", "machType" : 6, "b" : "7FFF9AB72000", "vmaddr" : "7FFF928F1000", "buildId" : "8FC3D139805534989AC56467CB7F4D14" }, { "path" : "/usr/lib/system/libcache.dylib", "machType" : 6, "b" : "7FFF8A07E000", "vmaddr" : "7FFF81DFD000", "buildId" : "6B245C0AF3EA383BA5425B0D0456A41B" }, { "path" : "/usr/lib/system/libcommonCrypto.dylib", "machType" : 6, "b" : "7FFF971FA000", "vmaddr" : "7FFF8EF79000", "buildId" : "766BC3F541F33315BABC72718A98EA92" }, { "path" : "/usr/lib/system/libcompiler_rt.dylib", "machType" : 6, "b" : "7FFF958D3000", "vmaddr" : "7FFF8D652000", "buildId" : "D3C4AB4023B43BC68C385B8758D14E80" }, { "path" : "/usr/lib/system/libcopyfile.dylib", "machType" : 6, "b" : "7FFF915F9000", "vmaddr" : "7FFF89378000", "buildId" : "F51332690B22388CA57C079667B6291E" }, { "path" : "/usr/lib/system/libcorecrypto.dylib", "machType" : 6, "b" : "7FFF97739000", "vmaddr" : "7FFF8F4B8000", "buildId" : "C6BD205F4ECE37EEBCABA76F39CDCFFA" }, { "path" : "/usr/lib/system/libdispatch.dylib", "machType" : 6, "b" : "7FFF973FC000", "vmaddr" : "7FFF8F17B000", "buildId" : "324C91892AF33356847F6F4CE1C6E901" }, { "path" : "/usr/lib/system/libdyld.dylib", "machType" : 6, "b" : "7FFF8EB35000", "vmaddr" : "7FFF868B4000", "buildId" : "AA629043C6F632FE8007E3478E99ACA7" }, { "path" : "/usr/lib/system/libkeymgr.dylib", "machType" : 6, "b" : "7FFF8E646000", "vmaddr" : "7FFF863C5000", "buildId" : "09397E0160663179A50C2CE666FDA929" }, { "path" : "/usr/lib/system/liblaunch.dylib", "machType" : 6, "b" : "7FFF8D80C000", "vmaddr" : "7FFF8558B000", "buildId" : "EDF719D6D2BB38DD8C944272BEFDA2CD" }, { "path" : "/usr/lib/system/libmacho.dylib", "machType" : 6, "b" : "7FFF96171000", "vmaddr" : "7FFF8DEF0000", "buildId" : "CB745E1F48853F96B38B2093DF488FD5" }, { "path" : "/us .......... em/libxpc.dylib", "machType" : 6, "b" : "7FFF9B538000", "vmaddr" : "7FFF932B7000", "buildId" : "61AB46109304354C9E9BD57198AE9866" }, { "path" : "/usr/lib/libobjc.A.dylib", "machType" : 6, "b" : "7FFF9A6A7000", "vmaddr" : "7FFF92426000", "buildId" : "9F45830DF1D53CDF94611A5477ED7D1E" }, { "path" : "/usr/lib/libauto.dylib", "machType" : 6, "b" : "7FFF9D57F000", "vmaddr" : "7FFF952FE000", "buildId" : "999E610F41FC32A3ADCA5EC049B65DFB" }, { "path" : "/usr/lib/libc++abi.dylib", "machType" : 6, "b" : "7FFF8A6FF000", "vmaddr" : "7FFF8247E000", "buildId" : "DCCC81773D0935BC97842A04FEC4C71B" }, { "path" : "/usr/lib/libDiagnosticMessagesClient.dylib", "machType" : 6, "b" : "7FFF96D06000", "vmaddr" : "7FFF8EA85000", "buildId" : "4243B6B421E9355B9C5A95A216233B96" } ] }}
 mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x39) [0x10236d089]
 mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0x90) [0x10236ca10]
 libsystem_platform.dylib(_sigtramp+0x1A) [0x7fff9bf72eaa]
 libsystem_malloc.dylib(szone_malloc_should_clear+0x445) [0x7fff88dcfa36]
 libsystem_c.dylib(abort+0x81) [0x7fff8c2f36e7]
 mongod(_ZN5mongo15invariantFailedEPKcS1_j+0x2E9) [0x102311ed9]
 mongod(_ZN5mongo4repl14BackgroundSync16_fetcherCallbackERKNS_10StatusWithINS_7Fetcher13QueryResponseEEEPNS_14BSONObjBuilderERKNS_11HostAndPortENS0_6OpTimeExNSt3__16chrono8durationIxNSE_5ratioILl1ELl1000EEEEEPNS_6StatusE+0x1E75) [0x101f38625]
 mongod(_ZNSt3__110__function6__funcINS_6__bindIMN5mongo4repl14BackgroundSyncEFvRKNS3_10StatusWithINS3_7Fetcher13QueryResponseEEEPNS3_14BSONObjBuilderERKNS3_11HostAndPortENS4_6OpTimeExNS_6chrono8durationIxNS_5ratioILl1ELl1000EEEEEPNS3_6StatusEEJPS5_RNS_12placeholders4__phILi1EEERNST_ILi3EEENS_17reference_wrapperISF_EERSH_RxRKSM_SO_EEENS_9allocatorIS14_EEFvSB_PNS7_10NextActionESD_EEclESB_OS18_OSD_+0x55) [0x101f3aac5]
 mongod(_ZN5mongo7Fetcher9_callbackERKNS_8executor12TaskExecutor25RemoteCommandCallbackArgsEPKc+0x2779) [0x101b12ca9]
 mongod(_ZNSt3__110__function6__funcIZZN5mongo8executor22ThreadPoolTaskExecutor21scheduleRemoteCommandERKNS3_20RemoteCommandRequestERKNS_8functionIFvRKNS3_12TaskExecutor25RemoteCommandCallbackArgsEEEEENK3$_2clERKNS2_10StatusWithINS3_21RemoteCommandResponseEEEEUlRKNS9_12CallbackArgsEE_NS_9allocatorISQ_EEFvSP_EEclESP_+0x15A) [0x10218a2fa]
 mongod(_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackENSt3__110shared_ptrINS1_13CallbackStateEEE+0x13D) [0x10218915d]
 mongod(_ZNSt3__110__function6__funcIZN5mongo8executor22ThreadPoolTaskExecutor23scheduleIntoPool_inlockEPNS_4listINS_10shared_ptrINS4_13CallbackStateEEENS_9allocatorIS8_EEEERKNS_15__list_iteratorIS8_PvEESH_NS_11unique_lockINS_5mutexEEEE3$_4NS9_ISL_EEFvvEEclEv+0x33) [0x102189493]
 mongod(_ZN5mongo10ThreadPool10_doOneTaskEPNSt3__111unique_lockINS1_5mutexEEE+0x25C) [0x10231bd6c]
 mongod(_ZN5mongo10ThreadPool13_consumeTasksEv+0x1FB) [0x10231c99b]
 mongod(_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE+0x10D) [0x10231c54d]
 mongod(_ZNSt3__114__thread_proxyINS_5tupleIJNS_6__bindIPFvPN5mongo10ThreadPoolERKNS_12basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEEEEJS5_SD_EEEEEEEEPvSI_+0x68) [0x10231d4c8]
 libsystem_pthread.dylib(_pthread_body+0x83) [0x7fff9dc33c13]
 libsystem_pthread.dylib(_pthread_body+0x0) [0x7fff9dc33b90]
 libsystem_pthread.dylib(thread_start+0xD) [0x7fff9dc31375]
----- END BACKTRACE -----