2015-01-05T10:49:44.503+1100 I SHARDING [mongosMain] MongoS version 2.8.0-rc4 starting: pid=29853 port=27017 64-bit host=Pixl.local (--help for usage)
2015-01-05T10:49:44.503+1100 I CONTROL [mongosMain] db version v2.8.0-rc4
2015-01-05T10:49:44.503+1100 I CONTROL [mongosMain] git version: 3ad571742911f04b307f0071979425511c4f2570
2015-01-05T10:49:44.503+1100 I CONTROL [mongosMain] build info: Darwin mci-osx108-7.build.10gen.cc 12.5.0 Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 BOOST_LIB_VERSION=1_49
2015-01-05T10:49:44.504+1100 I CONTROL [mongosMain] allocator: system
2015-01-05T10:49:44.504+1100 I CONTROL [mongosMain] options: { net: { port: 27017 }, processManagement: { fork: true }, sharding: { configDB: "Pixl.local:27024" }, systemLog: { destination: "file", logAppend: true, path: "/Users/davidhows/cases/CS-16675/data/mongos.log" } }
2015-01-05T10:49:44.543+1100 I NETWORK [mongosMain] waiting for connections on port 27017
2015-01-05T10:49:44.543+1100 I SHARDING [Balancer] about to contact config servers and shards
2015-01-05T10:49:44.546+1100 I NETWORK [Balancer] starting new replica set monitor for replica set shard01 with seeds Pixl.local:27018,Pixl.local:27019,Pixl.local:27020
2015-01-05T10:49:44.547+1100 I NETWORK [mongosMain] connection accepted from 127.0.0.1:56606 #1 (1 connection now open)
2015-01-05T10:49:44.547+1100 I NETWORK [ReplicaSetMonitorWatcher] starting
2015-01-05T10:49:44.552+1100 W NETWORK [Balancer] No primary detected for set shard01
2015-01-05T10:49:44.552+1100 W NETWORK [Balancer] No primary detected for set shard01
2015-01-05T10:49:44.554+1100 I NETWORK [mongosMain] connection accepted from 127.0.0.1:56617 #2 (2 connections now open)
2015-01-05T10:49:44.554+1100 I NETWORK [conn1] end connection 127.0.0.1:56606 (0 connections now open)
2015-01-05T10:49:44.555+1100 I NETWORK [conn2] end connection 127.0.0.1:56617 (0 connections now open)
2015-01-05T10:49:44.556+1100 W SHARDING [Balancer] could not initialize balancer, please check that all shards and config servers are up: ReplicaSetMonitor no master found for set: shard01
2015-01-05T10:49:44.556+1100 I SHARDING [Balancer] will retry to initialize balancer in one minute
2015-01-05T10:50:02.212+1100 I NETWORK [mongosMain] connection accepted from 127.0.0.1:56638 #3 (1 connection now open)
2015-01-05T10:50:14.548+1100 I ACCESS [UserCacheInvalidator] User cache generation changed from 54a9d198f1ace9d0f65ba5f4 to 54a9d1984eee6b52277ec227; invalidating user cache
2015-01-05T10:50:44.560+1100 I SHARDING [Balancer] about to contact config servers and shards
2015-01-05T10:50:44.563+1100 I NETWORK [Balancer] starting new replica set monitor for replica set shard02 with seeds Pixl.local:27021,Pixl.local:27022,Pixl.local:27023
2015-01-05T10:50:44.566+1100 I SHARDING [Balancer] config servers and shards contacted successfully
2015-01-05T10:50:44.566+1100 I SHARDING [Balancer] balancer id: Pixl.local:27017 started at Jan 5 10:50:44
2015-01-05T10:51:43.476+1100 I SHARDING [conn3] ChunkManager: time to load chunks for test.t1: 8ms sequenceNumber: 2 version: 2|5||54752fc8a021e577cfc14f11 based on: (empty)
2015-01-05T10:51:43.479+1100 I SHARDING [conn3] ChunkManager: time to load chunks for test.t2: 2ms sequenceNumber: 3 version: 13|1||5487a0872504a4716776a56c based on: (empty)
2015-01-05T10:52:39.623+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27018
2015-01-05T10:52:39.624+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27018 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27018]
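The RECV_TIMEOUT above means the monitor's status probe to Pixl.local:27018 got no reply within the socket timeout, which is also why the balancer saw no primary for shard01 at startup. A quick way to reproduce the probe is to connect straight to the member; this is a minimal sketch, assuming shell access to that host and port:

    // Probe the unresponsive member directly with the same command
    // the monitor uses (the log shows its { ismaster: 1 } queries).
    var member = new Mongo("Pixl.local:27018");
    member.getDB("admin").runCommand({ isMaster: 1 });
    // A healthy member replies immediately; a hang here reproduces
    // the monitor's RECV_TIMEOUT.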
2015-01-05T10:52:39.624+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:52:39.624+1100 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1420415384550207 microSec, clearing pool for Pixl.local:27018 of 0 connections
2015-01-05T10:52:39.625+1100 W NETWORK [ReplicaSetMonitorWatcher] No primary detected for set shard01
2015-01-05T10:52:49.188+1100 I SHARDING [conn3] ChunkManager: time to load chunks for test.t2: 0ms sequenceNumber: 4 version: 14|1||5487a0872504a4716776a56c based on: 13|1||5487a0872504a4716776a56c
2015-01-05T10:52:49.195+1100 I SHARDING [conn3] ChunkManager: time to load chunks for test.t1: 0ms sequenceNumber: 5 version: 2|5||54752fc8a021e577cfc14f11 based on: (empty)
2015-01-05T10:52:49.197+1100 I SHARDING [conn3] ChunkManager: time to load chunks for test.t2: 1ms sequenceNumber: 6 version: 14|1||5487a0872504a4716776a56c based on: (empty)
2015-01-05T10:52:49.200+1100 I NETWORK [conn3] PCursor erasing empty state { state: {}, retryNext: false, init: false, finish: false, errored: false }
2015-01-05T10:52:54.631+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27018
2015-01-05T10:52:54.631+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27018 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27018]
2015-01-05T10:52:54.631+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:53:09.635+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27018
2015-01-05T10:53:09.635+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27018 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27018]
2015-01-05T10:53:09.635+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:53:24.640+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27018
2015-01-05T10:53:24.640+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27018 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27018]
2015-01-05T10:53:24.640+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:53:44.552+1100 I NETWORK [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 60 secs, remote host 10.8.1.229:27018)
2015-01-05T10:54:07.882+1100 I NETWORK [conn3] end connection 127.0.0.1:56638 (0 connections now open)
2015-01-05T10:54:08.519+1100 I NETWORK [mongosMain] connection accepted from 127.0.0.1:56851 #4 (1 connection now open)
2015-01-05T10:54:39.673+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27020
2015-01-05T10:54:39.674+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27020 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27020]
2015-01-05T10:54:39.674+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:54:39.674+1100 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1420415384547913 microSec, clearing pool for Pixl.local:27020 of 0 connections
2015-01-05T10:54:46.274+1100 I SHARDING [conn4] ChunkManager: time to load chunks for test.t2: 0ms sequenceNumber: 7 version: 15|1||5487a0872504a4716776a56c based on: 14|1||5487a0872504a4716776a56c
2015-01-05T10:54:54.682+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27020
2015-01-05T10:54:54.683+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27020 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27020]
2015-01-05T10:54:54.683+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:55:09.689+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27020
2015-01-05T10:55:09.690+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27020 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27020]
2015-01-05T10:55:09.690+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:55:24.697+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27020
2015-01-05T10:55:24.697+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27020 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27020]
2015-01-05T10:55:24.698+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T10:55:44.555+1100 I NETWORK [PeriodicTaskRunner] Socket closed remotely, no longer connected (idle 60 secs, remote host 10.8.1.229:27020)
2015-01-05T11:10:40.215+1100 I NETWORK [conn4] end connection 127.0.0.1:56851 (0 connections now open)
2015-01-05T11:10:40.994+1100 I NETWORK [mongosMain] connection accepted from 127.0.0.1:57544 #5 (1 connection now open)
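From 11:11:42 the capture switches to D-level (debug) entries for conn5; together with the { setParameter: 1.0, logLevel: 0.0 } commands visible near the end of the log, this suggests verbosity was raised for the session and reset afterwards. A minimal sketch of doing the same from a shell connected to the mongos:

    // Raise mongos verbosity to capture D-level lines, then reset it;
    // the reset matches the logLevel: 0.0 command later in this log.
    db.adminCommand({ setParameter: 1, logLevel: 2 });
    // ... reproduce the problem ...
    db.adminCommand({ setParameter: 1, logLevel: 0 });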
2015-01-05T11:11:42.833+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 5 op: 2004 attempt: 0 1ms
2015-01-05T11:11:42.836+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 6 op: 2004 attempt: 0
2015-01-05T11:11:42.836+1100 D SHARDING [conn5] command: admin.$cmd { replSetGetStatus: 1.0, forShell: 1.0 } ntoreturn: -1 options: 0
2015-01-05T11:11:42.836+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 6 op: 2004 attempt: 0 0ms
2015-01-05T11:11:43.276+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 7 op: 2004 attempt: 0
2015-01-05T11:11:43.276+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:11:43.276+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 7 op: 2004 attempt: 0 0ms
2015-01-05T11:11:44.146+1100 D SHARDING [conn5] Request::process begin ns: test.t2 msg id: 8 op: 2004 attempt: 0
2015-01-05T11:11:44.146+1100 D SHARDING [conn5] query: test.t2 { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: 0 options: 0
2015-01-05T11:11:44.147+1100 D NETWORK [conn5] creating pcursor over QSpec { ns: "test.t2", n2skip: 0, n2return: 0, options: 0, query: { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} }
2015-01-05T11:11:44.149+1100 D QUERY [conn5] [QLOG] Beginning planning...
=============================
Options = NO_TABLE_SCAN
Canonical query:
ns=test.t2 limit=0 skip=0
Tree: x == 110.0
Sort: {}
Proj: {}
=============================
2015-01-05T11:11:44.149+1100 D QUERY [conn5] [QLOG] Index 0 is kp: { x: "hashed" }
2015-01-05T11:11:44.149+1100 D QUERY [conn5] [QLOG] Predicate over field 'x'
2015-01-05T11:11:44.149+1100 D QUERY [conn5] [QLOG] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:11:44.150+1100 D QUERY [conn5] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:11:44.150+1100 D QUERY [conn5] [QLOG] Rated tree: x == 110.0 || First: 0 notFirst: full path: x
2015-01-05T11:11:44.150+1100 D QUERY [conn5] [QLOG] Tagging memoID 1
2015-01-05T11:11:44.150+1100 D QUERY [conn5] [QLOG] Enumerator: memo just before moving:
2015-01-05T11:11:44.151+1100 D QUERY [conn5] [QLOG] About to build solntree from tagged tree: x == 110.0 || Selected Index #0 pos 0
2015-01-05T11:11:44.152+1100 D QUERY [conn5] [QLOG] Planner: adding solution:
FETCH
---filter: x == 110.0 || Selected Index #0 pos 0
---fetched = 1
---sortedByDiskLoc = 1
---getSort = [{}, ]
---Child:
------IXSCAN
---------keyPattern = { x: "hashed" }
---------direction = 1
---------bounds = field #0['x']: [-541895413742407152, -541895413742407152]
---------fetched = 0
---------sortedByDiskLoc = 1
---------getSort = [{}, ]
2015-01-05T11:11:44.152+1100 D QUERY [conn5] [QLOG] Planner: outputted 1 indexed solutions.
2015-01-05T11:11:44.152+1100 D NETWORK [conn5] initializing over 1 shards required by [test.t2 @ 15|1||5487a0872504a4716776a56c]
2015-01-05T11:11:44.152+1100 D NETWORK [conn5] initializing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] polling for status of connection to 10.8.1.229:27021, no events
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.153+1100 D NETWORK [conn5] dbclient_rs say using secondary or tagged node selection in shard02, read pref is { pref: "primary pref", tags: [ {} ] } (primary : Pixl.local:27021, lastTagged : [not cached])
2015-01-05T11:11:44.154+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.154+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.154+1100 D NETWORK [conn5] dbclient_rs selecting primary node Pixl.local:27021
2015-01-05T11:11:44.154+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.155+1100 D NETWORK [conn5] initialized query (lazily) on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:11:44.155+1100 D NETWORK [conn5] finishing over 1 shards
2015-01-05T11:11:44.155+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:11:44.155+1100 D NETWORK [conn5] finishing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:11:44.155+1100 D NETWORK [conn5] finished on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "(done)", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: { _id: ObjectId('5487a25c8cb198ac46491d81'), x: 110.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
2015-01-05T11:11:44.156+1100 D SHARDING [conn5] Request::process end ns: test.t2 msg id: 8 op: 2004 attempt: 0 9ms
2015-01-05T11:11:44.157+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 9 op: 2004 attempt: 0
2015-01-05T11:11:44.157+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:11:44.157+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 9 op: 2004 attempt: 0 0ms
"Pixl.local:27022", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416716584), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:11:56.585+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27023 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27021", me: "Pixl.local:27023", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416716584), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:11:56.585+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard01 2015-01-05T11:11:56.585+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard01 2015-01-05T11:11:56.585+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27019, no events 2015-01-05T11:11:56.586+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27019 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27019", electionId: ObjectId('54a9d2be1a886f291ae7ec51'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416716586), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:11:56.586+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27020, no events 2015-01-05T11:11:56.586+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27020 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27020", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416716586), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:11:56.587+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27018, no events 2015-01-05T11:11:56.587+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27018 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27018", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416716587), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:12:01.584+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:01.585+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:01.585+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:01.585+1100 D SHARDING [Balancer] found 2 shards listed on config server(s): Pixl.local:27024 (10.8.1.229) 2015-01-05T11:12:01.586+1100 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2015-01-05T11:12:01.586+1100 D SHARDING [Balancer] skipping balancing round because balancing is disabled 2015-01-05T11:12:06.590+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard02 
2015-01-05T11:12:06.591+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard02 2015-01-05T11:12:06.591+1100 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:Pixl.local:27021 2015-01-05T11:12:06.592+1100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-01-05T11:12:06.592+1100 D NETWORK [ReplicaSetMonitorWatcher] connected to server Pixl.local:27021 (10.8.1.229) 2015-01-05T11:12:06.592+1100 D NETWORK [ReplicaSetMonitorWatcher] connected connection! 2015-01-05T11:12:06.592+1100 D SHARDING [ReplicaSetMonitorWatcher] checking wire version of new connection Pixl.local:27021 (10.8.1.229) 2015-01-05T11:12:11.356+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 10 op: 2004 attempt: 0 2015-01-05T11:12:11.356+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0 2015-01-05T11:12:11.357+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 10 op: 2004 attempt: 0 0ms 2015-01-05T11:12:11.591+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:11.591+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:11.592+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events 2015-01-05T11:12:11.592+1100 D SHARDING [Balancer] found 2 shards listed on config server(s): Pixl.local:27024 (10.8.1.229) 2015-01-05T11:12:11.592+1100 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB 2015-01-05T11:12:11.593+1100 D SHARDING [Balancer] skipping balancing round because balancing is disabled 2015-01-05T11:12:11.593+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27021 2015-01-05T11:12:11.593+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27021 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27021] 2015-01-05T11:12:11.593+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed 2015-01-05T11:12:11.594+1100 D - [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: Pixl.local:27021 ns: admin.$cmd query: { isMaster: 1 } 2015-01-05T11:12:11.594+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27023, no events 2015-01-05T11:12:11.594+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27023 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27023", electionId: ObjectId('54a9d6d2cd43a515dd0f46af'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416731594), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:12:11.594+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27022, no events 2015-01-05T11:12:11.595+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27022 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27022", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416731595), maxWireVersion: 3, minWireVersion: 
0, ok: 1.0 } 2015-01-05T11:12:11.595+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard01 2015-01-05T11:12:11.595+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard01 2015-01-05T11:12:11.595+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27019, no events 2015-01-05T11:12:11.596+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27019 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27019", electionId: ObjectId('54a9d2be1a886f291ae7ec51'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416731596), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:12:11.596+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27020, no events 2015-01-05T11:12:11.596+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27020 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27020", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416731596), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:12:11.596+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27018, no events 2015-01-05T11:12:11.597+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27018 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27018", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416731597), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 } 2015-01-05T11:12:13.121+1100 D SHARDING [conn5] Request::process begin ns: test.t2 msg id: 11 op: 2004 attempt: 0 2015-01-05T11:12:13.122+1100 D SHARDING [conn5] query: test.t2 { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: 0 options: 0 2015-01-05T11:12:13.122+1100 D NETWORK [conn5] creating pcursor over QSpec { ns: "test.t2", n2skip: 0, n2return: 0, options: 0, query: { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} } 2015-01-05T11:12:13.122+1100 D QUERY [conn5] [QLOG] Beginning planning... 
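At 11:12:11.594 the refresh finally reaches a member that reports a new primary: shard02 has failed over from Pixl.local:27021 to Pixl.local:27023 (note the new electionId). The mongos's own view of each set can be inspected from a shell; a minimal sketch, assuming connPoolStats reports the replica set monitor state as it does on this mongos generation:

    // The replicaSets section of connPoolStats shows, per set, which
    // host the monitor currently believes is master.
    db.adminCommand({ connPoolStats: 1 }).replicaSets;
    // shard02 should now list Pixl.local:27023 with ismaster: true.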
2015-01-05T11:12:13.121+1100 D SHARDING [conn5] Request::process begin ns: test.t2 msg id: 11 op: 2004 attempt: 0
2015-01-05T11:12:13.122+1100 D SHARDING [conn5] query: test.t2 { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: 0 options: 0
2015-01-05T11:12:13.122+1100 D NETWORK [conn5] creating pcursor over QSpec { ns: "test.t2", n2skip: 0, n2return: 0, options: 0, query: { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} }
2015-01-05T11:12:13.122+1100 D QUERY [conn5] [QLOG] Beginning planning...
=============================
Options = NO_TABLE_SCAN
Canonical query:
ns=test.t2 limit=0 skip=0
Tree: x == 110.0
Sort: {}
Proj: {}
=============================
2015-01-05T11:12:13.122+1100 D QUERY [conn5] [QLOG] Index 0 is kp: { x: "hashed" }
2015-01-05T11:12:13.122+1100 D QUERY [conn5] [QLOG] Predicate over field 'x'
2015-01-05T11:12:13.123+1100 D QUERY [conn5] [QLOG] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:13.123+1100 D QUERY [conn5] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:13.123+1100 D QUERY [conn5] [QLOG] Rated tree: x == 110.0 || First: 0 notFirst: full path: x
2015-01-05T11:12:13.123+1100 D QUERY [conn5] [QLOG] Tagging memoID 1
2015-01-05T11:12:13.123+1100 D QUERY [conn5] [QLOG] Enumerator: memo just before moving:
2015-01-05T11:12:13.124+1100 D QUERY [conn5] [QLOG] About to build solntree from tagged tree: x == 110.0 || Selected Index #0 pos 0
2015-01-05T11:12:13.124+1100 D QUERY [conn5] [QLOG] Planner: adding solution:
FETCH
---filter: x == 110.0 || Selected Index #0 pos 0
---fetched = 1
---sortedByDiskLoc = 1
---getSort = [{}, ]
---Child:
------IXSCAN
---------keyPattern = { x: "hashed" }
---------direction = 1
---------bounds = field #0['x']: [-541895413742407152, -541895413742407152]
---------fetched = 0
---------sortedByDiskLoc = 1
---------getSort = [{}, ]
2015-01-05T11:12:13.124+1100 D QUERY [conn5] [QLOG] Planner: outputted 1 indexed solutions.
2015-01-05T11:12:13.124+1100 D NETWORK [conn5] initializing over 1 shards required by [test.t2 @ 15|1||5487a0872504a4716776a56c]
2015-01-05T11:12:13.124+1100 D NETWORK [conn5] initializing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
2015-01-05T11:12:13.124+1100 D NETWORK [conn5] polling for status of connection to 10.8.1.229:27021, no events
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] dbclient_rs say using secondary or tagged node selection in shard02, read pref is { pref: "primary pref", tags: [ {} ] } (primary : Pixl.local:27021, lastTagged : [not cached])
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.125+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.126+1100 D NETWORK [conn5] creating new connection to:Pixl.local:27023
2015-01-05T11:12:13.126+1100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-01-05T11:12:13.127+1100 D NETWORK [conn5] connected to server Pixl.local:27023 (10.8.1.229)
2015-01-05T11:12:13.127+1100 D NETWORK [conn5] connected connection!
2015-01-05T11:12:13.127+1100 D NETWORK [conn5] dbclient_rs selecting primary node Pixl.local:27023
2015-01-05T11:12:13.127+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.127+1100 D NETWORK [conn5] initialized query (lazily) on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:13.128+1100 D NETWORK [conn5] finishing over 1 shards
2015-01-05T11:12:13.128+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:13.128+1100 D NETWORK [conn5] finishing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:13.128+1100 D NETWORK [conn5] finished on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "(done)", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
2015-01-05T11:12:13.128+1100 D SHARDING [conn5] Request::process end ns: test.t2 msg id: 11 op: 2004 attempt: 0 6ms
2015-01-05T11:12:13.129+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 12 op: 2004 attempt: 0
2015-01-05T11:12:13.129+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:13.129+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 12 op: 2004 attempt: 0 0ms
2015-01-05T11:12:14.091+1100 D SHARDING [conn5] Request::process begin ns: test.t2 msg id: 13 op: 2004 attempt: 0
2015-01-05T11:12:14.091+1100 D SHARDING [conn5] query: test.t2 { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: 0 options: 0
2015-01-05T11:12:14.092+1100 D NETWORK [conn5] creating pcursor over QSpec { ns: "test.t2", n2skip: 0, n2return: 0, options: 0, query: { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} }
2015-01-05T11:12:14.092+1100 D QUERY [conn5] [QLOG] Beginning planning...
=============================
Options = NO_TABLE_SCAN
Canonical query:
ns=test.t2 limit=0 skip=0
Tree: x == 110.0
Sort: {}
Proj: {}
=============================
2015-01-05T11:12:14.092+1100 D QUERY [conn5] [QLOG] Index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.092+1100 D QUERY [conn5] [QLOG] Predicate over field 'x'
2015-01-05T11:12:14.092+1100 D QUERY [conn5] [QLOG] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.092+1100 D QUERY [conn5] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] Rated tree: x == 110.0 || First: 0 notFirst: full path: x
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] Tagging memoID 1
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] Enumerator: memo just before moving:
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] About to build solntree from tagged tree: x == 110.0 || Selected Index #0 pos 0
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] Planner: adding solution:
FETCH
---filter: x == 110.0 || Selected Index #0 pos 0
---fetched = 1
---sortedByDiskLoc = 1
---getSort = [{}, ]
---Child:
------IXSCAN
---------keyPattern = { x: "hashed" }
---------direction = 1
---------bounds = field #0['x']: [-541895413742407152, -541895413742407152]
---------fetched = 0
---------sortedByDiskLoc = 1
---------getSort = [{}, ]
2015-01-05T11:12:14.093+1100 D QUERY [conn5] [QLOG] Planner: outputted 1 indexed solutions.
2015-01-05T11:12:14.094+1100 D NETWORK [conn5] initializing over 1 shards required by [test.t2 @ 15|1||5487a0872504a4716776a56c]
2015-01-05T11:12:14.094+1100 D NETWORK [conn5] initializing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
2015-01-05T11:12:14.094+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.094+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.094+1100 D SHARDING [conn5] setting shard version of 15|0||5487a0872504a4716776a56c for test.t2 on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023
2015-01-05T11:12:14.094+1100 D SHARDING [conn5] last version sent with chunk manager iteration 0, current chunk manager iteration is 7
2015-01-05T11:12:14.095+1100 D SHARDING [conn5] setShardVersion shard02 Pixl.local:27023 test.t2 { setShardVersion: "test.t2", configdb: "Pixl.local:27024", shard: "shard02", shardHost: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", version: Timestamp 15000|0, versionEpoch: ObjectId('5487a0872504a4716776a56c') } 7
2015-01-05T11:12:14.095+1100 D SHARDING [conn5] setShardVersion failed! { need_authoritative: true, ok: 0.0, errmsg: "first setShardVersion" }
2015-01-05T11:12:14.095+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.096+1100 D SHARDING [conn5] loading chunk manager for collection test.t2 using old chunk manager w/ version 15|1||5487a0872504a4716776a56c and 100 chunks
2015-01-05T11:12:14.096+1100 D SHARDING [conn5] major version query from 15|1||5487a0872504a4716776a56c and over 2 shards is { query: { ns: "test.t2", lastmod: { $gte: Timestamp 15000|1 } }, orderby: { lastmod: 1 } }
2015-01-05T11:12:14.096+1100 D SHARDING [conn5] found 3 new chunks for collection test.t2 (tracking 3), new version is 16|1||5487a0872504a4716776a56c
2015-01-05T11:12:14.097+1100 D SHARDING [conn5] loaded 3 chunks into new chunk manager for test.t2 with version 16|1||5487a0872504a4716776a56c
2015-01-05T11:12:14.097+1100 I SHARDING [conn5] ChunkManager: time to load chunks for test.t2: 1ms sequenceNumber: 8 version: 16|1||5487a0872504a4716776a56c based on: 15|1||5487a0872504a4716776a56c
2015-01-05T11:12:14.097+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.097+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.097+1100 D NETWORK [conn5] dbclient_rs say using secondary or tagged node selection in shard02, read pref is { pref: "primary pref", tags: [ {} ] } (primary : Pixl.local:27023, lastTagged : [not cached])
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] dbclient_rs selecting primary node Pixl.local:27023
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] initialized query (lazily) on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:14.098+1100 D NETWORK [conn5] finishing over 1 shards
2015-01-05T11:12:14.099+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard02
2015-01-05T11:12:14.099+1100 D NETWORK [conn5] finishing on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:14.099+1100 D NETWORK [conn5] finished on shard shard02:shard02/Pixl.local:27021,Pixl.local:27022,Pixl.local:27023, current connection state is { state: { conn: "(done)", vinfo: "test.t2 @ 15|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
2015-01-05T11:12:14.099+1100 D SHARDING [conn5] Request::process end ns: test.t2 msg id: 13 op: 2004 attempt: 0 8ms
2015-01-05T11:12:14.099+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 14 op: 2004 attempt: 0
2015-01-05T11:12:14.100+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:14.100+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 14 op: 2004 attempt: 0 0ms
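The msg id 13 trace shows why the earlier msg id 11 query came back empty: the mongos was still routing on stale version 15|1, the shard answered setShardVersion with need_authoritative, and the router reloaded the chunk metadata up to 16|1. The same reload can be forced by hand from a shell connected to the mongos; a minimal sketch:

    // Ask this mongos to drop its cached routing table and reload it
    // from the config server, rather than waiting for a version
    // mismatch like the one above to trigger the refresh.
    db.adminCommand({ flushRouterConfig: 1 });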
2015-01-05T11:12:14.861+1100 D SHARDING [conn5] Request::process begin ns: test.t2 msg id: 15 op: 2004 attempt: 0
2015-01-05T11:12:14.861+1100 D SHARDING [conn5] query: test.t2 { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: 0 options: 0
2015-01-05T11:12:14.861+1100 D NETWORK [conn5] creating pcursor over QSpec { ns: "test.t2", n2skip: 0, n2return: 0, options: 0, query: { query: { x: 110.0 }, $readPreference: { mode: "primaryPreferred" } }, fields: {} } and CInfo { v_ns: "", filter: {} }
2015-01-05T11:12:14.862+1100 D QUERY [conn5] [QLOG] Beginning planning...
=============================
Options = NO_TABLE_SCAN
Canonical query:
ns=test.t2 limit=0 skip=0
Tree: x == 110.0
Sort: {}
Proj: {}
=============================
2015-01-05T11:12:14.862+1100 D QUERY [conn5] [QLOG] Index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.862+1100 D QUERY [conn5] [QLOG] Predicate over field 'x'
2015-01-05T11:12:14.862+1100 D QUERY [conn5] [QLOG] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.862+1100 D QUERY [conn5] Relevant index 0 is kp: { x: "hashed" }
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] Rated tree: x == 110.0 || First: 0 notFirst: full path: x
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] Tagging memoID 1
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] Enumerator: memo just before moving:
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] About to build solntree from tagged tree: x == 110.0 || Selected Index #0 pos 0
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] Planner: adding solution:
FETCH
---filter: x == 110.0 || Selected Index #0 pos 0
---fetched = 1
---sortedByDiskLoc = 1
---getSort = [{}, ]
---Child:
------IXSCAN
---------keyPattern = { x: "hashed" }
---------direction = 1
---------bounds = field #0['x']: [-541895413742407152, -541895413742407152]
---------fetched = 0
---------sortedByDiskLoc = 1
---------getSort = [{}, ]
2015-01-05T11:12:14.863+1100 D QUERY [conn5] [QLOG] Planner: outputted 1 indexed solutions.
2015-01-05T11:12:14.864+1100 D NETWORK [conn5] initializing over 1 shards required by [test.t2 @ 16|1||5487a0872504a4716776a56c]
2015-01-05T11:12:14.864+1100 D NETWORK [conn5] initializing on shard shard01:shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
2015-01-05T11:12:14.864+1100 D NETWORK [conn5] polling for status of connection to 10.8.1.229:27019, no events
2015-01-05T11:12:14.864+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.864+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.864+1100 D SHARDING [conn5] setting shard version of 16|0||5487a0872504a4716776a56c for test.t2 on shard shard01:shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020
2015-01-05T11:12:14.865+1100 D SHARDING [conn5] last version sent with chunk manager iteration 0, current chunk manager iteration is 8
2015-01-05T11:12:14.865+1100 D SHARDING [conn5] setShardVersion shard01 Pixl.local:27019 test.t2 { setShardVersion: "test.t2", configdb: "Pixl.local:27024", shard: "shard01", shardHost: "shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020", version: Timestamp 16000|0, versionEpoch: ObjectId('5487a0872504a4716776a56c') } 8
2015-01-05T11:12:14.868+1100 D SHARDING [conn5] saveGLEStats lastOpTime:0:0 electionId:54a9d2be1a886f291ae7ec51
2015-01-05T11:12:14.869+1100 D SHARDING [conn5] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('54a9d2be1a886f291ae7ec51') } }
2015-01-05T11:12:14.869+1100 D NETWORK [conn5] needed to set remote version on connection to value compatible with [test.t2 @ 16|1||5487a0872504a4716776a56c]
2015-01-05T11:12:14.869+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.869+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.870+1100 D NETWORK [conn5] dbclient_rs say using secondary or tagged node selection in shard01, read pref is { pref: "primary pref", tags: [ {} ] } (primary : Pixl.local:27019, lastTagged : [not cached])
2015-01-05T11:12:14.870+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.870+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.870+1100 D NETWORK [conn5] dbclient_rs selecting primary node Pixl.local:27019
2015-01-05T11:12:14.870+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.871+1100 D NETWORK [conn5] initialized query (lazily) on shard shard01:shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020, current connection state is { state: { conn: "shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020", vinfo: "test.t2 @ 16|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:14.871+1100 D NETWORK [conn5] finishing over 1 shards
2015-01-05T11:12:14.871+1100 D NETWORK [conn5] ReplicaSetMonitor::get shard01
2015-01-05T11:12:14.871+1100 D NETWORK [conn5] finishing on shard shard01:shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020, current connection state is { state: { conn: "shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020", vinfo: "test.t2 @ 16|1||5487a0872504a4716776a56c", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
2015-01-05T11:12:14.871+1100 D NETWORK [conn5] finished on shard shard01:shard01/Pixl.local:27018,Pixl.local:27019,Pixl.local:27020, current connection state is { state: { conn: "(done)", vinfo: "test.t2 @ 16|1||5487a0872504a4716776a56c", cursor: { _id: ObjectId('5487a25c8cb198ac46491d81'), x: 110.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
2015-01-05T11:12:14.872+1100 D SHARDING [conn5] Request::process end ns: test.t2 msg id: 15 op: 2004 attempt: 0 10ms
2015-01-05T11:12:14.872+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 16 op: 2004 attempt: 0
2015-01-05T11:12:14.872+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:14.873+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 16 op: 2004 attempt: 0 0ms
2015-01-05T11:12:21.597+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:21.597+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:21.598+1100 D NETWORK [Balancer] polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:21.599+1100 D SHARDING [Balancer] found 2 shards listed on config server(s): Pixl.local:27024 (10.8.1.229)
2015-01-05T11:12:21.599+1100 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB
2015-01-05T11:12:21.599+1100 D SHARDING [Balancer] skipping balancing round because balancing is disabled
2015-01-05T11:12:21.600+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard02
2015-01-05T11:12:21.601+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard02
2015-01-05T11:12:21.601+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27023, no events
2015-01-05T11:12:21.601+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27023 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27023", electionId: ObjectId('54a9d6d2cd43a515dd0f46af'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416741601), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:21.601+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27022, no events
2015-01-05T11:12:21.602+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27022 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27022", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416741602), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:21.602+1100 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:Pixl.local:27021
2015-01-05T11:12:21.603+1100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-01-05T11:12:21.603+1100 D NETWORK [ReplicaSetMonitorWatcher] connected to server Pixl.local:27021 (10.8.1.229)
2015-01-05T11:12:21.603+1100 D NETWORK [ReplicaSetMonitorWatcher] connected connection!
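With routing fixed, the msg id 15 trace finally returns the document from shard01. Note also the recurring "skipping balancing round because balancing is disabled" entries: chunk migration is switched off for this cluster. A minimal sketch of checking and, if intended, re-enabling it from a shell connected to the mongos:

    // The balancer state lives in config.settings; the sh helpers
    // wrap it.
    sh.getBalancerState();      // false while balancing is disabled
    sh.setBalancerState(true);  // re-enable balancing rounds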
2015-01-05T11:12:21.603+1100 D SHARDING [ReplicaSetMonitorWatcher] checking wire version of new connection Pixl.local:27021 (10.8.1.229)
2015-01-05T11:12:26.604+1100 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 10.8.1.229:27021
2015-01-05T11:12:26.605+1100 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.8.1.229:27021 error: 9001 socket exception [RECV_TIMEOUT] server [10.8.1.229:27021]
2015-01-05T11:12:26.605+1100 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
2015-01-05T11:12:26.605+1100 D - [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: Pixl.local:27021 ns: admin.$cmd query: { isMaster: 1 }
2015-01-05T11:12:26.606+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard01
2015-01-05T11:12:26.606+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard01
2015-01-05T11:12:26.606+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27019, no events
2015-01-05T11:12:26.606+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27019 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27019", electionId: ObjectId('54a9d2be1a886f291ae7ec51'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416746606), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:26.607+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27018, no events
2015-01-05T11:12:26.607+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27018 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27018", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416746607), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:26.607+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27020, no events
2015-01-05T11:12:26.607+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27020 based on ismaster reply: { setName: "shard01", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27018", "Pixl.local:27019", "Pixl.local:27020" ], primary: "Pixl.local:27019", me: "Pixl.local:27020", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416746607), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:31.023+1100 D NETWORK polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:31.030+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27018, no events
2015-01-05T11:12:31.030+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27019, no events
2015-01-05T11:12:31.031+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27019, no events
2015-01-05T11:12:31.031+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27020, no events
2015-01-05T11:12:31.031+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27022, no events
2015-01-05T11:12:31.031+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27023, no events
2015-01-05T11:12:31.032+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:31.032+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27021, no events
2015-01-05T11:12:31.032+1100 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 1ms
2015-01-05T11:12:31.032+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27024, no events
2015-01-05T11:12:31.032+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27019, no events
2015-01-05T11:12:31.033+1100 D NETWORK [PeriodicTaskRunner] polling for status of connection to 10.8.1.229:27023, no events
2015-01-05T11:12:31.033+1100 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms
2015-01-05T11:12:31.605+1100 D SHARDING [Balancer] found 2 shards listed on config server(s): Pixl.local:27024 (10.8.1.229)
2015-01-05T11:12:31.606+1100 D SHARDING [Balancer] Refreshing MaxChunkSize: 64MB
2015-01-05T11:12:31.606+1100 D SHARDING [Balancer] skipping balancing round because balancing is disabled
2015-01-05T11:12:32.999+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 17 op: 2004 attempt: 0
2015-01-05T11:12:32.999+1100 D SHARDING [conn5] command: admin.$cmd { query: { setParameter: 1.0, logLevel: 0.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:32.999+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 17 op: 2004 attempt: 0 0ms
2015-01-05T11:12:33.000+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 18 op: 2004 attempt: 0
2015-01-05T11:12:33.000+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:33.000+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 18 op: 2004 attempt: 0 0ms
2015-01-05T11:12:36.612+1100 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: shard02
2015-01-05T11:12:36.613+1100 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set shard02
2015-01-05T11:12:36.613+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27023, no events
2015-01-05T11:12:36.614+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27023 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: true, secondary: false, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27023", electionId: ObjectId('54a9d6d2cd43a515dd0f46af'), maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416756613), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:36.614+1100 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 10.8.1.229:27022, no events
2015-01-05T11:12:36.614+1100 D NETWORK [ReplicaSetMonitorWatcher] Updating host Pixl.local:27022 based on ismaster reply: { setName: "shard02", setVersion: 1, ismaster: false, secondary: true, hosts: [ "Pixl.local:27021", "Pixl.local:27022", "Pixl.local:27023" ], primary: "Pixl.local:27023", me: "Pixl.local:27022", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 1000, localTime: new Date(1420416756614), maxWireVersion: 3, minWireVersion: 0, ok: 1.0 }
2015-01-05T11:12:36.614+1100 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:Pixl.local:27021
2015-01-05T11:12:36.615+1100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-01-05T11:12:36.615+1100 D NETWORK [ReplicaSetMonitorWatcher] connected to server Pixl.local:27021 (10.8.1.229)
2015-01-05T11:12:36.616+1100 D NETWORK [ReplicaSetMonitorWatcher] connected connection!
2015-01-05T11:12:36.616+1100 D SHARDING [ReplicaSetMonitorWatcher] checking wire version of new connection Pixl.local:27021 (10.8.1.229)
2015-01-05T11:12:40.507+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 19 op: 2004 attempt: 0
2015-01-05T11:12:40.507+1100 D SHARDING [conn5] command: admin.$cmd { query: { setParameter: 1.0, logLevel: 0.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:40.507+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 19 op: 2004 attempt: 0 0ms
2015-01-05T11:12:40.507+1100 D SHARDING [conn5] Request::process begin ns: admin.$cmd msg id: 20 op: 2004 attempt: 0
2015-01-05T11:12:40.508+1100 D SHARDING [conn5] command: admin.$cmd { query: { replSetGetStatus: 1.0, forShell: 1.0 }, $readPreference: { mode: "primaryPreferred" } } ntoreturn: -1 options: 0
2015-01-05T11:12:40.508+1100 D SHARDING [conn5] Request::process end ns: admin.$cmd msg id: 20 op: 2004 attempt: 0 0ms
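The msg id 17 and 19 commands are the shell turning debug logging back off ({ setParameter: 1, logLevel: 0 }), which ends the verbose portion of the capture. As a final check that the router's view is now consistent, explain on the same query should report the hashed-index plan served from shard01 only; a minimal sketch:

    // Confirm routing after the metadata refresh: the winning plan
    // should be the { x: "hashed" } index scan targeting shard01.
    db.getSiblingDB("test").t2.find({ x: 110 }).explain();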