2015-04-01T16:20:02.049+0000 D SHARDING isInRangeTest passed
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] MongoDB starting : pid=3712 port=27018 dbpath=D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018 64-bit host=WIN-1GHRL3D741T
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] db version v3.0.0
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] git version: a841fd6394365954886924a35076691b4d149168 modules: enterprise
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1j-fips 15 Oct 2014
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] allocator: system
2015-04-01T16:20:02.050+0000 I CONTROL [initandlisten] options: { config: "d:\temp\mongo-urwm8b", net: { http: { enabled: false }, ipv6: true, port: 27018, ssl: { CAFile: "C:\test-lib\ssl-files\ca.pem", PEMKeyFile: "C:\test-lib\ssl-files\server.pem", allowInvalidCertificates: true, mode: "requireSSL", weakCertificateValidation: true } }, replication: { oplogSizeMB: 150, replSet: "repl0" }, security: { authorization: "enabled", keyFile: "D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\key" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018", engine: "mmapv1", journal: { enabled: true }, mmapv1: { nsSize: 1, preallocDataFiles: false, smallFiles: true } }, systemLog: { destination: "file", path: "D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\logs\db27018.log", verbosity: 2 } }
2015-04-01T16:20:02.050+0000 D COMMAND [SNMPAgent] BackgroundJob starting: SNMPAgent
2015-04-01T16:20:02.051+0000 D NETWORK [SNMPAgent] SNMPAgent not enabled
2015-04-01T16:20:02.063+0000 I JOURNAL [initandlisten] journal dir=D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\journal
2015-04-01T16:20:02.063+0000 D COMMAND [DataFileSync] BackgroundJob starting: DataFileSync
2015-04-01T16:20:02.066+0000 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2015-04-01T16:20:02.169+0000 I JOURNAL [durability] Durability thread started
2015-04-01T16:20:02.170+0000 I JOURNAL [journal writer] Journal writer thread started
2015-04-01T16:20:02.175+0000 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
2015-04-01T16:20:02.175+0000 D STORAGE [initandlisten] done repairDatabases
2015-04-01T16:20:02.175+0000 D QUERY [initandlisten] Running query: query: {} sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:02.175+0000 D QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:02.176+0000 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{} 0ms
2015-04-01T16:20:02.176+0000 D INDEX [initandlisten] checking complete
2015-04-01T16:20:02.176+0000 I INDEX [initandlisten] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\local.ns, filling with zeroes...
2015-04-01T16:20:02.184+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\local.0, filling with zeroes...
2015-04-01T16:20:02.184+0000 I STORAGE [FileAllocator] creating directory D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\_tmp
2015-04-01T16:20:02.188+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\local.0, size: 16MB, took 0.003 secs
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] allocating new extent
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] local.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] local.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.190+0000 D QUERY [initandlisten] Collection local.me does not exist. Using EOF plan: query: {} sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:20:02.190+0000 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] allocating new extent
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] allocating new extent
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.191+0000 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:02.191+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
2015-04-01T16:20:02.191+0000 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
2015-04-01T16:20:02.191+0000 D REPL [ReplExecNetThread-0] thread starting
2015-04-01T16:20:02.192+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:10485760 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:20:02.192+0000 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
2015-04-01T16:20:02.192+0000 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.192+0000 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
2015-04-01T16:20:02.192+0000 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
2015-04-01T16:20:02.192+0000 D STORAGE [initandlisten] allocating new extent
2015-04-01T16:20:02.192+0000 D STORAGE [initandlisten] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a29000
2015-04-01T16:20:02.192+0000 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
2015-04-01T16:20:02.192+0000 I NETWORK [initandlisten] waiting for connections on port 27018 ssl
2015-04-01T16:20:02.287+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62920 #1 (1 connection now open)
2015-04-01T16:20:02.287+0000 D NETWORK [conn1] SocketException: remote: 127.0.0.1:62920 error: 9001 socket exception [CLOSED] server [127.0.0.1:62920]
2015-04-01T16:20:02.287+0000 I NETWORK [conn1] end connection 127.0.0.1:62920 (0 connections now open)
2015-04-01T16:20:02.288+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62921 #2 (1 connection now open)
2015-04-01T16:20:02.290+0000 W NETWORK [conn2] no SSL certificate provided by peer
2015-04-01T16:20:02.290+0000 I ACCESS [conn2] note: no users configured in admin.system.users, allowing localhost access
2015-04-01T16:20:02.290+0000 D COMMAND [conn2] run command admin.$cmd { ismaster: 1 }
2015-04-01T16:20:02.290+0000 I COMMAND [conn2] command admin.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:256 locks:{} 0ms
2015-04-01T16:20:02.291+0000 D COMMAND [conn2] run command admin.$cmd { isMaster: null }
2015-04-01T16:20:02.291+0000 I COMMAND [conn2] command admin.$cmd command: isMaster { isMaster: null } keyUpdates:0 writeConflicts:0 numYields:0 reslen:256 locks:{} 0ms
2015-04-01T16:20:02.291+0000 D NETWORK [conn2] SocketException: remote: 127.0.0.1:62921 error: 9001 socket exception [CLOSED] server [127.0.0.1:62921]
2015-04-01T16:20:02.291+0000 I NETWORK [conn2] end connection 127.0.0.1:62921 (0 connections now open)
2015-04-01T16:20:02.292+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62922 #3 (1 connection now open)
2015-04-01T16:20:02.294+0000 W NETWORK [conn3] no SSL certificate provided by peer
2015-04-01T16:20:02.294+0000 D COMMAND [conn3] run command admin.$cmd { ismaster: 1 }
2015-04-01T16:20:02.294+0000 I COMMAND [conn3] command admin.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:256 locks:{} 0ms
2015-04-01T16:20:02.295+0000 D COMMAND [conn3] run command admin.$cmd { buildinfo: 1 }
2015-04-01T16:20:02.295+0000 I COMMAND [conn3] command admin.$cmd command: buildInfo { buildinfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:20:02.296+0000 D NETWORK [conn3] SocketException: remote: 127.0.0.1:62922 error: 9001 socket exception [CLOSED] server [127.0.0.1:62922]
2015-04-01T16:20:02.296+0000 I NETWORK [conn3] end connection 127.0.0.1:62922 (0 connections now open)
2015-04-01T16:20:02.815+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62927 #4 (1 connection now open)
2015-04-01T16:20:02.817+0000 W NETWORK [conn4] no SSL certificate provided by peer
2015-04-01T16:20:02.817+0000 D COMMAND [conn4] run command admin.$cmd { ismaster: 1 }
2015-04-01T16:20:02.817+0000 I COMMAND [conn4] command admin.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:256 locks:{} 0ms
2015-04-01T16:20:02.817+0000 D COMMAND [conn4] run command admin.$cmd { getnonce: 1 }
2015-04-01T16:20:02.817+0000 I COMMAND [conn4] command admin.$cmd command: getnonce { getnonce: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:65 locks:{} 0ms
2015-04-01T16:20:02.818+0000 D COMMAND [conn4] run command admin.$cmd { authenticate: 1, user: "bob", nonce: "xxx", key: "xxx" }
2015-04-01T16:20:02.818+0000 I ACCESS [conn4] authenticate db: admin { authenticate: 1, user: "bob", nonce: "xxx", key: "xxx" }
2015-04-01T16:20:02.818+0000 I ACCESS [conn4] Failed to authenticate bob@admin with mechanism MONGODB-CR: AuthenticationFailed UserNotFound Could not find user bob@admin
2015-04-01T16:20:02.818+0000 I COMMAND [conn4] command admin.$cmd command: authenticate { authenticate: 1, user: "bob", nonce: "xxx", key: "xxx" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:71 locks:{} 0ms
2015-04-01T16:20:02.818+0000 D COMMAND [conn4] run command admin.$cmd { buildinfo: 1 }
2015-04-01T16:20:02.818+0000 I COMMAND [conn4] command admin.$cmd command: buildInfo { buildinfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:20:02.819+0000 D NETWORK [conn4] SocketException: remote: 127.0.0.1:62927 error: 9001 socket exception [CLOSED] server [127.0.0.1:62927]
2015-04-01T16:20:02.819+0000 I NETWORK [conn4] end connection 127.0.0.1:62927 (0 connections now open)
2015-04-01T16:20:02.831+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62930 #5 (1 connection now open)
2015-04-01T16:20:02.835+0000 D COMMAND [conn5] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D436334386D545656475278476C654E6C6E4134415654376C6F714B765A376476) }
2015-04-01T16:20:02.836+0000 I COMMAND [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D436334386D545656475278476C654E6C6E4134415654376C6F714B765A376476) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:02.852+0000 D COMMAND [conn5] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D436334386D545656475278476C654E6C6E4134415654376C6F714B765A376476336A6441695945464A5A72616D6B3468784F634D492F703938644B6746...), conversationId: 1 }
2015-04-01T16:20:02.852+0000 I COMMAND [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D436334386D545656475278476C654E6C6E4134415654376C6F714B765A376476336A6441695945464A5A72616D6B3468784F634D492F703938644B6746...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:02.852+0000 D COMMAND [conn5] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:02.852+0000 I ACCESS [conn5] Successfully authenticated as principal __system on local
2015-04-01T16:20:02.852+0000 I COMMAND [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:02.852+0000 D COMMAND [conn5] run command admin.$cmd { _isSelf: 1 }
2015-04-01T16:20:02.852+0000 I COMMAND [conn5] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:53 locks:{} 0ms
2015-04-01T16:20:02.852+0000 D NETWORK [conn5] SocketException: remote: 127.0.0.1:62930 error: 9001 socket exception [CLOSED] server [127.0.0.1:62930]
2015-04-01T16:20:02.852+0000 I NETWORK [conn5] end connection 127.0.0.1:62930 (0 connections now open)
2015-04-01T16:20:02.872+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62932 #6 (1 connection now open)
2015-04-01T16:20:02.877+0000 D COMMAND [conn6] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D747A366743582B325536675469686F473130532F3655466C46554B5431517951) }
2015-04-01T16:20:02.877+0000 I COMMAND [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D747A366743582B325536675469686F473130532F3655466C46554B5431517951) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:02.907+0000 D COMMAND [conn6] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D747A366743582B325536675469686F473130532F3655466C46554B5431517951686B31644C4E2F6D70794B5834333852484D576D555438445752496D78...), conversationId: 1 }
2015-04-01T16:20:02.907+0000 I COMMAND [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D747A366743582B325536675469686F473130532F3655466C46554B5431517951686B31644C4E2F6D70794B5834333852484D576D555438445752496D78...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:02.907+0000 D COMMAND [conn6] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:02.907+0000 I ACCESS [conn6] Successfully authenticated as principal __system on local
2015-04-01T16:20:02.907+0000 I COMMAND [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:02.908+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: true }
2015-04-01T16:20:02.908+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: true }
2015-04-01T16:20:02.908+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:02.908Z
2015-04-01T16:20:02.908+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:02.908+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:112 locks:{} 0ms
2015-04-01T16:20:02.911+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:02.911+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:02.911+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:02.912+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:102 locks:{} 0ms
2015-04-01T16:20:02.913+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1)
2015-04-01T16:20:02.917+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:20:03.042+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:03.042+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:05.042Z
2015-04-01T16:20:03.042+0000 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
2015-04-01T16:20:03.044+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:03.048+0000 D NETWORK connected to server localhost:27017 (127.0.0.1)
2015-04-01T16:20:03.052+0000 W NETWORK The server certificate does not match the host name localhost
2015-04-01T16:20:03.080+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62939 #7 (2 connections now open)
2015-04-01T16:20:03.084+0000 D COMMAND [conn7] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4C3834473862752F485A317839536C4C3134783756444C42556970345A4C4A68) }
2015-04-01T16:20:03.084+0000 I COMMAND [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4C3834473862752F485A317839536C4C3134783756444C42556970345A4C4A68) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:03.085+0000 D NETWORK getBoundAddrs(): [ 10.237.207.49] [ ::1] [ 127.0.0.1] [ fe80::5efe:10.237.207.49] [ 2001:0:9d38:6abd:209e:dc1:c958:91ea] [ fe80::209e:dc1:c958:91ea]
2015-04-01T16:20:03.087+0000 D NETWORK getAddrsForHost("localhost:27018"): [ ::1] [ 127.0.0.1]
2015-04-01T16:20:03.087+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:03.087+0000 D NETWORK connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:20:03.091+0000 W NETWORK The server certificate does not match the host name localhost
2015-04-01T16:20:03.113+0000 D COMMAND [conn7] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D4C3834473862752F485A317839536C4C3134783756444C42556970345A4C4A686D4D52714D6663727337454E577963394B5A6D5637636E712B71587473...), conversationId: 1 }
2015-04-01T16:20:03.113+0000 I COMMAND [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4C3834473862752F485A317839536C4C3134783756444C42556970345A4C4A686D4D52714D6663727337454E577963394B5A6D5637636E712B71587473...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:03.113+0000 D COMMAND [conn7] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:03.114+0000 I ACCESS [conn7] Successfully authenticated as principal __system on local
2015-04-01T16:20:03.114+0000 I COMMAND [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:03.114+0000 D COMMAND [conn7] run command admin.$cmd { _isSelf: 1 }
2015-04-01T16:20:03.114+0000 I COMMAND [conn7] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:53 locks:{} 0ms
2015-04-01T16:20:03.114+0000 D NETWORK [conn7] SocketException: remote: 127.0.0.1:62939 error: 9001 socket exception [CLOSED] server [127.0.0.1:62939]
2015-04-01T16:20:03.114+0000 I NETWORK [conn7] end connection 127.0.0.1:62939 (1 connection now open)
2015-04-01T16:20:03.121+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62941 #8 (2 connections now open)
2015-04-01T16:20:03.124+0000 D COMMAND [conn8] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6B5150664B2F58313553736448497665722B5256344F664F642B4D44662B4330) }
2015-04-01T16:20:03.124+0000 I COMMAND [conn8] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6B5150664B2F58313553736448497665722B5256344F664F642B4D44662B4330) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:03.141+0000 D COMMAND [conn8] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D6B5150664B2F58313553736448497665722B5256344F664F642B4D44662B43304A4F6F546A673559614D706B422F6263744B32446B6662676D55445173...), conversationId: 1 }
2015-04-01T16:20:03.141+0000 I COMMAND [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6B5150664B2F58313553736448497665722B5256344F664F642B4D44662B43304A4F6F546A673559614D706B422F6263744B32446B6662676D55445173...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:03.141+0000 D COMMAND [conn8] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:03.141+0000 I ACCESS [conn8] Successfully authenticated as principal __system on local
2015-04-01T16:20:03.141+0000 I COMMAND [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:03.141+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:03.141+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:03.141+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:03.141Z
2015-04-01T16:20:03.141+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:03.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:102 locks:{} 0ms
2015-04-01T16:20:03.142+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:03.142+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:20:03.144+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:20:03.214+0000 D STORAGE [WriteReplSetConfig] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:a49000
2015-04-01T16:20:03.215+0000 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
2015-04-01T16:20:03.215+0000 D STORAGE [WriteReplSetConfig] allocating new extent
2015-04-01T16:20:03.215+0000 D STORAGE [WriteReplSetConfig] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a4b000
2015-04-01T16:20:03.215+0000 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
2015-04-01T16:20:03.215+0000 D QUERY [WriteReplSetConfig] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:03.215+0000 I REPL [WriteReplSetConfig] Starting replication applier threads
2015-04-01T16:20:03.216+0000 I REPL [ReplicationExecutor] New replica set config in use: { _id: "repl0", version: 1, members: [ { _id: 0, host: "localhost:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 99.0, tags: { ordinal: "one", dc: "ny" }, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 }, { _id: 2, host: "localhost:27019", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-04-01T16:20:03.216+0000 I REPL [ReplicationExecutor] This node is localhost:27018 in the config
2015-04-01T16:20:03.216+0000 I REPL [ReplicationExecutor] transition to STARTUP2
2015-04-01T16:20:03.216+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:03.216Z
2015-04-01T16:20:03.216+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:03.216Z
2015-04-01T16:20:03.216+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:03.217+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:03.217+0000 D REPL [ReplExecNetThread-1] thread starting
2015-04-01T16:20:03.218+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:03.218+0000 I REPL [ReplicationExecutor] Member localhost:27017 is now in state SECONDARY
2015-04-01T16:20:03.218+0000 D REPL [ReplicationExecutor] Not standing for election because node has no applied oplog entries; member is not currently a secondary; member is more than 10 seconds behind the most up-to-date member (mask 0x4A); my last optime is 0:0 and the newest is 551c1ab3:1
2015-04-01T16:20:03.218+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:05.218Z
2015-04-01T16:20:03.219+0000 I REPL [rsSync] ******
2015-04-01T16:20:03.219+0000 I REPL [rsSync] creating replication oplog of size: 150MB...
2015-04-01T16:20:03.219+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\local.1, filling with zeroes...
2015-04-01T16:20:03.219+0000 D REPL [ReplExecNetThread-2] thread starting
2015-04-01T16:20:03.219+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:03.220+0000 D NETWORK [ReplExecNetThread-1] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:20:03.223+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\local.1, size: 256MB, took 0.003 secs
2015-04-01T16:20:03.224+0000 D STORAGE [rsSync] MmapV1ExtentManager::allocateExtent desiredSize:157286400 fromFreeList: 0 eloc: 1:2000
2015-04-01T16:20:03.224+0000 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
2015-04-01T16:20:03.225+0000 W NETWORK [ReplExecNetThread-1] The server certificate does not match the host name localhost
2015-04-01T16:20:03.251+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:03.255+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:03.255+0000 I REPL [ReplicationExecutor] Member localhost:27019 is now in state STARTUP2
2015-04-01T16:20:03.255+0000 D REPL [ReplicationExecutor] Not standing for election because node has no applied oplog entries; member is not currently a secondary; member is more than 10 seconds behind the most up-to-date member (mask 0x4A); my last optime is 0:0 and the newest is 551c1ab3:1
2015-04-01T16:20:03.255+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:05.255Z
2015-04-01T16:20:03.351+0000 I REPL [rsSync] ******
2015-04-01T16:20:03.351+0000 I REPL [rsSync] initial sync pending
2015-04-01T16:20:03.352+0000 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
2015-04-01T16:20:03.352+0000 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2015-04-01T16:20:03.352+0000 D JOURNAL [journal writer] lsn set 898
2015-04-01T16:20:04.353+0000 I REPL [rsSync] initial sync pending
2015-04-01T16:20:04.353+0000 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
2015-04-01T16:20:04.353+0000 I REPL [rsSync] no valid sync sources found in current replset to do an initial sync
2015-04-01T16:20:04.912+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:04.912+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:04.912+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms
2015-04-01T16:20:04.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806148673241089), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:20:04.913+0000 D COMMAND [conn6] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806148673241089), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:20:04.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806148673241089), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:20:04.914+0000 D COMMAND [conn6] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1ab4b5355f778169cfeb') }
2015-04-01T16:20:04.914+0000 D COMMAND [conn6] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1ab4b5355f778169cfeb') }
2015-04-01T16:20:04.914+0000 D COMMAND [conn6] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1ab4b5355f778169cfeb') }
2015-04-01T16:20:04.914+0000 I REPL [ReplicationExecutor] replSetElect voting yea for localhost:27017 (0)
2015-04-01T16:20:04.914+0000 I COMMAND [conn6] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1ab4b5355f778169cfeb') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:20:05.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:05.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:05.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms
2015-04-01T16:20:05.218+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:05.218+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:05.218+0000 I REPL [ReplicationExecutor] Member localhost:27017 is now in state PRIMARY
2015-04-01T16:20:05.219+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:07.218Z
2015-04-01T16:20:05.256+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:05.256+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:05.256+0000 I REPL [ReplicationExecutor] Member localhost:27019 is now in state SECONDARY
2015-04-01T16:20:05.256+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:07.256Z
2015-04-01T16:20:05.354+0000 I REPL [rsSync] initial sync pending
2015-04-01T16:20:05.354+0000 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
2015-04-01T16:20:05.354+0000 I REPL [ReplicationExecutor] syncing from: localhost:27019
2015-04-01T16:20:05.354+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:05.354+0000 D NETWORK [rsSync] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:20:05.357+0000 W NETWORK [rsSync] The server certificate does not match the host name localhost
2015-04-01T16:20:05.375+0000 D STORAGE [rsSync] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 1:9602000
2015-04-01T16:20:05.376+0000 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset
2015-04-01T16:20:05.376+0000 D STORAGE [rsSync] allocating new extent
2015-04-01T16:20:05.376+0000 D STORAGE [rsSync] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 1:9604000
2015-04-01T16:20:05.376+0000 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset
2015-04-01T16:20:05.376+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:05.376+0000 I REPL [rsSync] initial sync drop all databases
2015-04-01T16:20:05.376+0000 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
2015-04-01T16:20:05.376+0000 I REPL [rsSync] initial sync clone all databases
2015-04-01T16:20:05.377+0000 I REPL [rsSync] initial sync data copy, starting syncup
2015-04-01T16:20:05.377+0000 I REPL [rsSync] oplog sync 1 of 3
2015-04-01T16:20:05.377+0000 I REPL [rsSync] oplog sync 2 of 3
2015-04-01T16:20:05.377+0000 I REPL [rsSync] initial sync building indexes
2015-04-01T16:20:05.377+0000 I REPL [rsSync] oplog sync 3 of 3
2015-04-01T16:20:05.379+0000 D QUERY [rsSync] Running query: query: {} sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:05.379+0000 D QUERY [rsSync] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:05.379+0000 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{} 0ms
2015-04-01T16:20:05.379+0000 I REPL [rsSync] initial sync finishing up
2015-04-01T16:20:05.379+0000 I REPL [rsSync] replSet set minValid=551c1ab3:1
2015-04-01T16:20:05.379+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:05.379+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:20:05.379+0000 I REPL [rsSync] initial sync done 2015-04-01T16:20:05.383+0000 I REPL [ReplicationExecutor] transition to RECOVERING 2015-04-01T16:20:05.384+0000 I REPL [ReplicationExecutor] transition to SECONDARY 2015-04-01T16:20:06.219+0000 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 551c1ab3:1 0 2015-04-01T16:20:06.219+0000 I REPL [ReplicationExecutor] could not find member to sync from 2015-04-01T16:20:06.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:06.913+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:06.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:07.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:07.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:07.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:07.218+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:20:07.218+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:20:07.218+0000 D REPL [ReplExecNetThread-2] Network status of sending 
replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:20:07.218+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:09.218Z 2015-04-01T16:20:07.256+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:20:07.256+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:20:07.256+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:09.256Z 2015-04-01T16:20:08.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:08.913+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:08.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:09.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:09.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:09.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:09.218+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:20:09.218+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:20:09.218+0000 D REPL [ReplicationExecutor] 
Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:11.218Z 2015-04-01T16:20:09.256+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:20:09.256+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:20:09.256+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:20:09.256+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:11.256Z 2015-04-01T16:20:10.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:10.913+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:10.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:11.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:11.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:11.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:11.218+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:20:11.218+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:20:11.218+0000 D REPL 
[ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:13.218Z 2015-04-01T16:20:11.256+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:20:11.256+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:20:11.256+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:13.256Z 2015-04-01T16:20:12.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:12.913+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:12.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:13.059+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62949 #9 (3 connections now open) 2015-04-01T16:20:13.061+0000 W NETWORK [conn9] no SSL certificate provided by peer 2015-04-01T16:20:13.061+0000 D COMMAND [conn9] run command admin.$cmd { ismaster: 1 } 2015-04-01T16:20:13.061+0000 I COMMAND [conn9] command admin.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:20:13.065+0000 D NETWORK [conn9] SocketException: remote: 127.0.0.1:62949 error: 9001 socket exception [CLOSED] server [127.0.0.1:62949] 2015-04-01T16:20:13.065+0000 I NETWORK [conn9] end connection 127.0.0.1:62949 (2 connections now open) 2015-04-01T16:20:13.070+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62952 #10 (3 connections now open) 2015-04-01T16:20:13.071+0000 W NETWORK [conn10] no SSL certificate provided by peer 2015-04-01T16:20:13.071+0000 D COMMAND 
[conn10] run command admin.$cmd { ismaster: 1 } 2015-04-01T16:20:13.071+0000 I COMMAND [conn10] command admin.$cmd command: isMaster { ismaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:20:13.109+0000 D NETWORK [conn10] SocketException: remote: 127.0.0.1:62952 error: 9001 socket exception [CLOSED] server [127.0.0.1:62952] 2015-04-01T16:20:13.109+0000 I NETWORK [conn10] end connection 127.0.0.1:62952 (2 connections now open) 2015-04-01T16:20:13.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:13.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:13.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:117 locks:{} 0ms 2015-04-01T16:20:13.218+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:20:13.218+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:20:13.219+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:20:13.219+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:15.219Z 2015-04-01T16:20:13.221+0000 I REPL [ReplicationExecutor] syncing from: localhost:27017 2015-04-01T16:20:13.221+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:20:13.221+0000 D NETWORK [rsBackgroundSync] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:20:13.223+0000 W NETWORK [rsBackgroundSync] The server certificate does not match the host name localhost 2015-04-01T16:20:13.241+0000 D REPL [rsBackgroundSync] 
repl: local.oplog.rs.find({ ts: { $gte: Timestamp 1427905203000|1 } }) 2015-04-01T16:20:13.241+0000 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2015-04-01T16:20:13.241+0000 I REPL [SyncSourceFeedback] replset setting syncSourceFeedback to localhost:27017 2015-04-01T16:20:13.241+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:20:13.241+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:20:13.242+0000 D NETWORK [SyncSourceFeedback] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:20:13.242+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:20:13.242+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\admin.ns, filling with zeroes... 2015-04-01T16:20:13.246+0000 W NETWORK [SyncSourceFeedback] The server certificate does not match the host name localhost 2015-04-01T16:20:13.255+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\admin.0, filling with zeroes... 
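The entries above all share the mongod 3.0 plain-text log shape: an ISO-8601 timestamp with offset, a one-letter severity, a component tag such as REPL or NETWORK, the thread or connection context in square brackets, and a free-form message. A minimal sketch of splitting one such line into fields (the regex and field names are my own, not part of any MongoDB tooling):

```python
import re

# mongod 3.0 log lines look like:
#   <timestamp> <severity> <component> [<context>] <message>
# Best-effort pattern for the entries shown in this log.
LINE_RE = re.compile(
    r"^(?P<ts>\S+) "             # ISO-8601 timestamp, e.g. 2015-04-01T16:20:05.256+0000
    r"(?P<severity>[DIWEF]) "    # D=debug, I=info, W=warning, E=error, F=fatal
    r"(?P<component>\S+) +"      # REPL, NETWORK, COMMAND, STORAGE, ...
    r"\[(?P<context>[^\]]+)\] "  # thread or connection name
    r"(?P<message>.*)$"          # the rest of the entry
)

def parse_line(line):
    """Split one mongod 3.0 log line into its fields, or return None."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else None
```

For example, feeding it the `Member localhost:27019 is now in state SECONDARY` entry above yields severity `I`, component `REPL`, and context `ReplicationExecutor`.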
2015-04-01T16:20:13.257+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:13.257+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:13.257+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:15.257Z
2015-04-01T16:20:13.258+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\admin.0, size: 16MB, took 0.003 secs
2015-04-01T16:20:13.266+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:20:13.266+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:20:13.266+0000 D STORAGE [repl writer worker 15] admin.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] admin.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.267+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { create: "system.version" }
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] create collection admin.system.version {}
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] admin.system.version: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:20:13.267+0000 D STORAGE [repl writer worker 15] admin.system.version: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.268+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:13.268+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:20:13.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "authSchema" }
2015-04-01T16:20:13.268+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:13.269+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:20:13.269+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { create: "system.users" }
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] create collection admin.system.users {}
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] admin.system.users: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:2b000
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] admin.system.users: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.269+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:20:13.270+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:4b000
2015-04-01T16:20:13.270+0000 I INDEX [repl writer worker 15] build index on: admin.system.users properties: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
2015-04-01T16:20:13.270+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:20:13.270+0000 D STORAGE [repl writer worker 15] admin.system.users: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.270+0000 D INDEX [repl writer worker 15] bulk commit starting for index: user_1_db_1
2015-04-01T16:20:13.270+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:20:13.270+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:20:13.270+0000 D STORAGE [repl writer worker 15] admin.system.users: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.270+0000 D STORAGE [repl writer worker 15] admin.system.users: clearing plan cache - collection info cache reset
2015-04-01T16:20:13.270+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:13.271+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:20:13.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "admin.bob" }
2015-04-01T16:20:13.272+0000 D REPL [SyncSourceFeedback] handshaking upstream updater
2015-04-01T16:20:13.272+0000 D REPL [SyncSourceFeedback] Sending to localhost:27017 (127.0.0.1) the replication handshake: { replSetUpdatePosition: 1, handshake: { handshake: ObjectId('551c1ab2ff257d5b3c9d1a53'), member: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } }
2015-04-01T16:20:13.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905213000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:20:13.273+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905213000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:20:14.913+0000 D QUERY [conn6] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:14.913+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:14.913+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:14.913+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:15.142+0000 D QUERY [conn8] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:15.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:15.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:15.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:15.220+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:15.220+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:15.220+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:17.220Z
2015-04-01T16:20:15.257+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:15.257+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:15.257+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:15.257+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:17.257Z
2015-04-01T16:20:16.914+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:16.914+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:16.914+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:17.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:17.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:17.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:17.220+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:17.220+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:17.221+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:19.220Z
2015-04-01T16:20:17.257+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:17.257+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:17.257+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:19.257Z
2015-04-01T16:20:18.915+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:18.915+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:18.915+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:19.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:19.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:19.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:19.221+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:19.221+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:19.221+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:19.221+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:21.221Z
2015-04-01T16:20:19.257+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:19.257+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:19.257+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:21.257Z
2015-04-01T16:20:20.916+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:20.916+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:20.916+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:21.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:21.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:21.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:21.221+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:21.221+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:21.221+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:23.221Z
2015-04-01T16:20:21.258+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:21.258+0000 D NETWORK [ReplExecNetThread-1] polling for status of connection to 127.0.0.1:27019, no events
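The `Scheduling heartbeat to ... at <time>` entries above show each peer being re-polled on a fixed cadence: every scheduled time is roughly two seconds after the previous send, which is consistent with the replica set default heartbeat interval of 2000 ms. A quick sanity check of that spacing, using two consecutive send timestamps for localhost:27019 copied from the entries above (the parsing code is mine, not MongoDB's):

```python
from datetime import datetime

# Two consecutive replSetHeartbeat sends to localhost:27019,
# copied verbatim from the log entries above.
FMT = "%Y-%m-%dT%H:%M:%S.%f%z"
t1 = datetime.strptime("2015-04-01T16:20:19.257+0000", FMT)
t2 = datetime.strptime("2015-04-01T16:20:21.258+0000", FMT)

# Gap between sends in milliseconds; ~2000 ms matches the
# default replica-set heartbeat interval.
delta_ms = (t2 - t1).total_seconds() * 1000
print(round(delta_ms))  # ~2001
```

The same two-second gap holds for the heartbeats to localhost:27017 (16:20:19.221 then 16:20:21.221), so both peers are being monitored on the default schedule.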
2015-04-01T16:20:21.258+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:21.258+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:23.258Z
2015-04-01T16:20:22.917+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:22.917+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:22.917+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:23.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:23.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:23.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:23.221+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:23.221+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:23.221+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:25.221Z
2015-04-01T16:20:23.259+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:23.259+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:23.259+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:25.259Z
2015-04-01T16:20:24.918+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:24.918+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:24.918+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:25.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:25.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:25.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:25.221+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:25.221+0000 D NETWORK [ReplExecNetThread-1] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:25.221+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:25.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:27.222Z
2015-04-01T16:20:25.259+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:25.259+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:25.259+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:27.259Z
2015-04-01T16:20:26.919+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:26.919+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:26.919+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:27.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:27.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:27.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:27.223+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:27.224+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:27.224+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:29.224Z
2015-04-01T16:20:27.259+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:27.259+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:27.259+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:27.259+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:29.259Z
2015-04-01T16:20:28.920+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:28.920+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:28.920+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:29.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:29.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:29.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:29.224+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:29.224+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:29.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:31.225Z
2015-04-01T16:20:29.259+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:29.259+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:29.259+0000 D 
REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:31.259Z 2015-04-01T16:20:30.920+0000 D COMMAND [conn6] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:30.920+0000 D COMMAND [conn6] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:20:30.920+0000 I COMMAND [conn6] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:20:31.142+0000 D COMMAND [conn8] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:31.142+0000 D COMMAND [conn8] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:20:31.142+0000 I COMMAND [conn8] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:20:31.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:20:31.225+0000 D NETWORK [ReplExecNetThread-1] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:20:31.225+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:20:31.226+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:33.226Z 2015-04-01T16:20:31.259+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:20:31.259+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 
2015-04-01T16:20:31.259+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:33.259Z
2015-04-01T16:20:32.921+0000 D NETWORK [conn6] SocketException: remote: 127.0.0.1:62932 error: 9001 socket exception [CLOSED] server [127.0.0.1:62932]
2015-04-01T16:20:32.921+0000 I NETWORK [conn6] end connection 127.0.0.1:62932 (1 connection now open)
2015-04-01T16:20:32.922+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62961 #11 (2 connections now open)
2015-04-01T16:20:32.926+0000 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:32.926+0000 D COMMAND [conn11] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D5746684845795445587633425A345735714B4A7A77525367713871796D6B4F34) }
2015-04-01T16:20:32.926+0000 I COMMAND [conn11] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D5746684845795445587633425A345735714B4A7A77525367713871796D6B4F34) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:32.954+0000 D COMMAND [conn11] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D5746684845795445587633425A345735714B4A7A77525367713871796D6B4F346F5A5069764E742F7A70684B583352423878584C703044744E794D3544...), conversationId: 1 }
2015-04-01T16:20:32.954+0000 I COMMAND [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D5746684845795445587633425A345735714B4A7A77525367713871796D6B4F346F5A5069764E742F7A70684B583352423878584C703044744E794D3544...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:32.955+0000 D COMMAND [conn11] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:32.955+0000 I ACCESS [conn11] Successfully authenticated as principal __system on local
2015-04-01T16:20:32.955+0000 I COMMAND [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:32.956+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:32.956+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:32.956+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:33.146+0000 D NETWORK [conn8] SocketException: remote: 127.0.0.1:62941 error: 9001 socket exception [CLOSED] server [127.0.0.1:62941]
2015-04-01T16:20:33.146+0000 I NETWORK [conn8] end connection 127.0.0.1:62941 (1 connection now open)
2015-04-01T16:20:33.159+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62963 #12 (2 connections now open)
2015-04-01T16:20:33.163+0000 D QUERY [conn12] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:33.163+0000 D COMMAND [conn12] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D794C46664967343151576931494650456A794868714233596F6A6434674D566A) }
2015-04-01T16:20:33.163+0000 I COMMAND [conn12] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D794C46664967343151576931494650456A794868714233596F6A6434674D566A) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:20:33.191+0000 D COMMAND [conn12] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D794C46664967343151576931494650456A794868714233596F6A6434674D566A764F757A4649526D6A42677A6B36525335616D7A555561766258456E6E...), conversationId: 1 }
2015-04-01T16:20:33.191+0000 I COMMAND [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D794C46664967343151576931494650456A794868714233596F6A6434674D566A764F757A4649526D6A42677A6B36525335616D7A555561766258456E6E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:33.191+0000 D COMMAND [conn12] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:20:33.191+0000 I ACCESS [conn12] Successfully authenticated as principal __system on local
2015-04-01T16:20:33.191+0000 I COMMAND [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:33.191+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:33.192+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:33.192+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:33.226+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:33.227+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:33.227+0000 D NETWORK [ReplExecNetThread-1] connected to server localhost:27017 (127.0.0.1)
2015-04-01T16:20:33.231+0000 W NETWORK [ReplExecNetThread-1] The server certificate does not match the host name localhost
2015-04-01T16:20:33.261+0000 D REPL [ReplExecNetThread-1] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:33.261+0000 D REPL [ReplExecNetThread-1] thread shutting down
2015-04-01T16:20:33.262+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:35.262Z
2015-04-01T16:20:33.262+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:33.263+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:20:33.263+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:20:33.267+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:20:33.297+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:33.297+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:35.297Z
2015-04-01T16:20:34.957+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:34.957+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:34.957+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:35.193+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:35.193+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:35.193+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:35.262+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:35.262+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:35.263+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:37.263Z
2015-04-01T16:20:35.297+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:35.297+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:35.297+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:37.297Z
2015-04-01T16:20:36.957+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:36.957+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:36.957+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:37.195+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:37.195+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:37.195+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:37.263+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:37.263+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:37.263+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:39.263Z
2015-04-01T16:20:37.299+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:37.301+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:37.301+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:39.301Z
2015-04-01T16:20:38.957+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:38.957+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:38.957+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:39.196+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:39.196+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:39.196+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:39.264+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:39.264+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:39.264+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:39.265+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:41.265Z
2015-04-01T16:20:39.301+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:39.301+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:39.301+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:39.301+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:41.301Z
2015-04-01T16:20:40.959+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:40.959+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:40.959+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:41.197+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:41.197+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:41.197+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:41.265+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:41.265+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:41.266+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:43.265Z
2015-04-01T16:20:41.301+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:41.301+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:41.301+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:43.301Z
2015-04-01T16:20:42.960+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:42.960+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:42.960+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:43.197+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:43.197+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:43.197+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:43.265+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:43.265+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:43.265+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:45.265Z
2015-04-01T16:20:43.301+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:43.301+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:43.301+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:45.301Z
2015-04-01T16:20:44.960+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:44.960+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:44.960+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:45.198+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:45.198+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:45.198+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:45.266+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:45.266+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:45.266+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:45.267+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:47.267Z
2015-04-01T16:20:45.302+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:45.302+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:45.303+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:45.305+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:47.305Z
2015-04-01T16:20:46.961+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:46.961+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:46.961+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:47.198+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:47.198+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:47.198+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:47.267+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:47.267+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:47.268+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:49.268Z
2015-04-01T16:20:47.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:47.306+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:47.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:49.306Z
2015-04-01T16:20:48.962+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:48.962+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:48.962+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:49.199+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:49.199+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:49.199+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:49.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:49.268+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:49.269+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:51.268Z
2015-04-01T16:20:49.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:49.306+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:49.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:51.306Z
2015-04-01T16:20:50.962+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:50.962+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:50.962+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:51.199+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:51.199+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:51.199+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:51.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:51.268+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:51.268+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:51.268+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:53.268Z
2015-04-01T16:20:51.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:51.306+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:51.306+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:51.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:53.306Z
2015-04-01T16:20:52.962+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:52.962+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:52.962+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:53.199+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:53.199+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:53.199+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:53.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:53.268+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:53.268+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:55.268Z
2015-04-01T16:20:53.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:53.306+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:53.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:55.306Z
2015-04-01T16:20:54.962+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:54.962+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:54.962+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:55.199+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:55.199+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:55.199+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:55.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:55.268+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:55.268+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:57.268Z
2015-04-01T16:20:55.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:55.306+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:55.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:57.306Z
2015-04-01T16:20:56.962+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:56.962+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:56.962+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:57.199+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:57.199+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:57.199+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:57.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:57.268+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:20:57.268+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:57.268+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:20:59.268Z
2015-04-01T16:20:57.306+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:57.306+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:20:57.306+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:57.306+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:20:59.306Z
2015-04-01T16:20:58.963+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:58.963+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:20:58.963+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:59.060+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62972 #13 (3 connections now open)
2015-04-01T16:20:59.213+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:59.213+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:20:59.213+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:20:59.268+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:20:59.269+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:20:59.269+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:01.269Z
2015-04-01T16:20:59.327+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:20:59.327+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:20:59.327+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:01.327Z
2015-04-01T16:20:59.419+0000 D QUERY [conn13] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:20:59.419+0000 D COMMAND [conn13] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:20:59.420+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 1ms
2015-04-01T16:20:59.467+0000 D COMMAND [conn13] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:20:59.467+0000 I COMMAND [conn13] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:20:59.476+0000 D COMMAND [conn13] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D26743021243226487250617338254F76582E5931) }
2015-04-01T16:20:59.476+0000 D QUERY [conn13] Using idhack: query: { _id: "authSchema" } sort: {} projection: {} skip: 0 limit: 0
2015-04-01T16:20:59.476+0000 D QUERY [conn13] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
2015-04-01T16:20:59.477+0000 D QUERY [conn13] Only one plan is available; it will be run but will not be cached. query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 }
2015-04-01T16:20:59.478+0000 I COMMAND [conn13] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D26743021243226487250617338254F76582E5931) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 1ms
2015-04-01T16:20:59.619+0000 D COMMAND [conn13] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D26743021243226487250617338254F76582E5931314D344A623374475A4E4D70457261616B6753703459696161394D486C4269412C703D614C77753661...) }
2015-04-01T16:20:59.619+0000 I COMMAND [conn13] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D26743021243226487250617338254F76582E5931314D344A623374475A4E4D70457261616B6753703459696161394D486C4269412C703D614C77753661...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:20:59.620+0000 D COMMAND [conn13] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) }
2015-04-01T16:20:59.620+0000 D QUERY [conn13] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
2015-04-01T16:20:59.620+0000 D QUERY [conn13] Only one plan is available; it will be run but will not be cached. query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 }
2015-04-01T16:20:59.621+0000 I ACCESS [conn13] Successfully authenticated as principal bob on admin
2015-04-01T16:20:59.621+0000 I COMMAND [conn13] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:20:59.630+0000 D COMMAND [conn13] run command admin.$cmd { getLastError: 1 }
2015-04-01T16:20:59.630+0000 I COMMAND [conn13] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms
2015-04-01T16:20:59.636+0000 D COMMAND [conn13] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:20:59.636+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:20:59.638+0000 D COMMAND [conn13] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:20:59.638+0000 I COMMAND [conn13] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:21:00.148+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:00.150+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:00.150+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:00.162+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:00.169+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.007 secs
2015-04-01T16:21:00.179+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:00.179+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:00.180+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "AggregateOperationTests" }
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.AggregateOperationTests {}
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] Tests04011620.AggregateOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:00.180+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:00.181+0000 D STORAGE [repl writer worker 15] Tests04011620.AggregateOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:00.182+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:00.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:00.183+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:00.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:00.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:00.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:00.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:00.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:00.184+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:00.388+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:00.389+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:00.389+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:00.389+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:00.389+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:00.493+0000 D REPL [rsBackgroundSync] bgsync buffer has 488 bytes
2015-04-01T16:21:00.963+0000 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:00.963+0000 D COMMAND [conn11] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:00.963+0000 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:01.215+0000 D COMMAND [conn12] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:01.215+0000 D COMMAND [conn12] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:01.215+0000 I COMMAND [conn12] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:01.269+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:01.270+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:01.270+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:03.270Z
2015-04-01T16:21:01.327+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:01.327+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:01.327+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:03.327Z
2015-04-01T16:21:02.120+0000 D REPL [rsBackgroundSync] bgsync buffer has 16779456 bytes
2015-04-01T16:21:02.239+0000 D REPL [rsBackgroundSync] bgsync buffer has 16781261 bytes
2015-04-01T16:21:02.275+0000 D REPL [rsBackgroundSync] bgsync buffer has 16783080 bytes
2015-04-01T16:21:02.298+0000 D REPL [rsBackgroundSync] bgsync buffer has 16784915 bytes
2015-04-01T16:21:02.331+0000 D REPL [rsBackgroundSync] bgsync buffer has 16786726 bytes
2015-04-01T16:21:02.364+0000 D REPL [rsBackgroundSync] bgsync buffer has 16788551 bytes
2015-04-01T16:21:02.382+0000 D REPL [rsBackgroundSync] bgsync buffer has 16790362 bytes
2015-04-01T16:21:02.411+0000 D REPL [rsBackgroundSync] bgsync buffer has 16792187 bytes
2015-04-01T16:21:02.434+0000 D REPL [rsBackgroundSync] bgsync buffer has 16794012 bytes
2015-04-01T16:21:02.460+0000 D REPL [rsBackgroundSync] bgsync buffer has 16795789 bytes
2015-04-01T16:21:02.484+0000 D REPL [rsBackgroundSync] bgsync buffer has 16797654 bytes
2015-04-01T16:21:02.504+0000 D REPL [rsBackgroundSync] bgsync buffer has 16799449 bytes
2015-04-01T16:21:02.533+0000 D REPL [rsBackgroundSync] bgsync buffer has 16801236 bytes
2015-04-01T16:21:02.564+0000 D REPL [rsBackgroundSync] bgsync buffer has 16803037 bytes
2015-04-01T16:21:02.593+0000 D REPL [rsBackgroundSync] bgsync buffer has 16804917 bytes
2015-04-01T16:21:02.963+0000 D NETWORK [conn11] SocketException: remote: 127.0.0.1:62961 error: 9001 socket exception [CLOSED] server [127.0.0.1:62961]
2015-04-01T16:21:02.963+0000 I NETWORK [conn11] end connection 127.0.0.1:62961 (2 connections now open)
2015-04-01T16:21:02.963+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62975 #14 (3 connections now open)
2015-04-01T16:21:03.215+0000 D NETWORK [conn12] SocketException: remote: 127.0.0.1:62963 error: 9001 socket exception [CLOSED] server [127.0.0.1:62963]
2015-04-01T16:21:03.215+0000 I NETWORK [conn12] end connection 127.0.0.1:62963 (2 connections now open)
2015-04-01T16:21:03.215+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62977 #15 (3 connections now open)
2015-04-01T16:21:03.270+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:03.270+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:21:03.270+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1)
2015-04-01T16:21:03.273+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:21:03.327+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:03.327+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:21:03.327+0000 D NETWORK [ReplExecNetThread-2] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:21:03.330+0000 W NETWORK [ReplExecNetThread-2] The server certificate does not match the host name localhost
2015-04-01T16:21:04.910+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:04.910+0000 I STORAGE [DataFileSync] flushing mmaps took 2840ms for 7 files
2015-04-01T16:21:04.922+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:04.953+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:04.960+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:04.960+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:04.960+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:04.965+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:04.978+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:04.978+0000 D QUERY [conn14] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:04.978+0000 D COMMAND [conn14] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D732F31372F6B3976396A4A4A7564477A4C74794E43494E6B594D564D6D6A6C6B) }
2015-04-01T16:21:04.978+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:04.978+0000 I COMMAND [conn14] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D732F31372F6B3976396A4A4A7564477A4C74794E43494E6B594D564D6D6A6C6B) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:21:04.979+0000 D QUERY [conn15] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:04.979+0000 D COMMAND [conn15] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4D6C6A3446746775764F4C796A73364C4E664C6C43706344635371796F444E47) }
2015-04-01T16:21:04.979+0000 I COMMAND [conn15] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4D6C6A3446746775764F4C796A73364C4E664C6C43706344635371796F444E47) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:21:05.009+0000 D COMMAND [conn14] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D732F31372F6B3976396A4A4A7564477A4C74794E43494E6B594D564D6D6A6C6B6959723774554942456D7458355377724F4744424247764E654C46336F...), conversationId: 1 }
2015-04-01T16:21:05.009+0000 I COMMAND [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D732F31372F6B3976396A4A4A7564477A4C74794E43494E6B594D564D6D6A6C6B6959723774554942456D7458355377724F4744424247764E654C46336F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:21:05.009+0000 D COMMAND [conn15] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D4D6C6A3446746775764F4C796A73364C4E664C6C43706344635371796F444E474C69366C4671594D6A796F626C366F54586B78726764634E63582B5638...), conversationId: 1 }
2015-04-01T16:21:05.009+0000 D COMMAND [conn14] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:21:05.009+0000 I ACCESS [conn14] Successfully authenticated as principal __system on local
2015-04-01T16:21:05.009+0000 I COMMAND [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:21:05.009+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:05.009+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:05.010+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:05.009+0000 I COMMAND [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4D6C6A3446746775764F4C796A73364C4E664C6C43706344635371796F444E474C69366C4671594D6A796F626C366F54586B78726764634E63582B5638...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:21:05.010+0000 D COMMAND [conn15] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:21:05.010+0000 I ACCESS [conn15] Successfully authenticated as principal __system on local
2015-04-01T16:21:05.010+0000 I COMMAND [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:21:05.011+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:05.011+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:05.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.011+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:05.011+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:05.011+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:05.044+0000 D JOURNAL [journal writer] lsn set 61075
2015-04-01T16:21:05.046+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:05.048+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.050+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "AggregateToCollectionOperationTests" }
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.AggregateToCollectionOperationTests {}
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:05.050+0000 D STORAGE [repl writer worker 15] Tests04011620.AggregateToCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.051+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.051+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:05.051+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.051+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:05.051+0000 D STORAGE [repl writer worker 15] Tests04011620.AggregateToCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.051+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:05.051+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:05.052+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:05.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:05.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:05.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:05.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:05.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:05.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:05.052+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.053+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:05.053+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "tmp.agg_out.1", temp: true }
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.tmp.agg_out.1 { temp: true }
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] Tests04011620.tmp.agg_out.1: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:2b000
2015-04-01T16:21:05.053+0000 D STORAGE [repl writer worker 15] Tests04011620.tmp.agg_out.1: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.053+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:05.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.054+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:05.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:05.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:05.054+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:05.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.055+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:05.055+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011620.tmp.agg_out.1", to: "Tests04011620.awesome", dropTarget: true }
2015-04-01T16:21:05.055+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011620.tmp.agg_out.1", to: "Tests04011620.awesome", dropTarget: true }
2015-04-01T16:21:05.055+0000 D STORAGE [repl writer worker 15] Tests04011620.awesome: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:05.055+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.056+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:05.056+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:05.056+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:05.056+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:05.111+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:05.148+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:05.149+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:05.150+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:07.150Z
2015-04-01T16:21:05.150+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:05.161+0000 I JOURNAL [repl writer worker 15] journalCleanup...
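The entries above follow mongod 3.0's plain-text log format: an ISO-8601 timestamp with offset, a severity letter (D for debug, I for informational), a component name (REPL, STORAGE, COMMAND, ...), the thread or connection context in square brackets, then the message. A minimal parser sketch for lines in this shape; the regex and helper name are my own, not part of any MongoDB tooling:

```python
import re

# One mongod 3.0 log line: <timestamp> <severity> <component> [<context>] <message>
LOG_LINE = re.compile(
    r"^(?P<ts>\S+) "             # ISO-8601 timestamp, e.g. 2015-04-01T16:21:05.055+0000
    r"(?P<severity>[DIWEF]) "    # D=debug, I=informational, W=warning, E=error, F=fatal
    r"(?P<component>\S+)\s+"     # component, possibly padded with spaces
    r"\[(?P<context>[^\]]+)\] "  # thread/connection name in brackets
    r"(?P<message>.*)$"          # everything after the context is the message
)

def parse_line(line):
    """Return the named fields of one mongod log line, or None if it doesn't match."""
    m = LOG_LINE.match(line)
    return m.groupdict() if m else None

# A line taken verbatim from the log above (the $out rename step).
sample = ("2015-04-01T16:21:05.055+0000 D COMMAND [repl writer worker 15] "
          "run command admin.$cmd { renameCollection: \"Tests04011620.tmp.agg_out.1\", "
          "to: \"Tests04011620.awesome\", dropTarget: true }")
fields = parse_line(sample)
```

The `create tmp.agg_out.1` / `renameCollection ... dropTarget: true` pair parsed here is the server-side footprint of an aggregation with `$out`: results are written to a temporary collection, then atomically renamed over the target.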
2015-04-01T16:21:05.161+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:05.162+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:05.166+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:05.167+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:05.168+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:05.168+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.168+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:05.168+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:05.185+0000 D JOURNAL [journal writer] lsn set 61225
2015-04-01T16:21:05.187+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:05.190+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:05.192+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.192+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:05.192+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.192+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.192+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:05.193+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:05.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905260000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:05.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:05.194+0000 D REPL [rsSync] replication batch size is 7
2015-04-01T16:21:05.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:05.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:05.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:05.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:05.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:05.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 }
2015-04-01T16:21:05.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 }
2015-04-01T16:21:05.239+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:05.239+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.1, filling with zeroes...
2015-04-01T16:21:05.242+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.1, size: 511MB, took 0.002 secs
2015-04-01T16:21:05.242+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:276824064 fromFreeList: 0 eloc: 1:2000
2015-04-01T16:21:06.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905261000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.159+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.162+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.162+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.162+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.162+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.162+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.162+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.162+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:06.163+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.163+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.164+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.164+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.164+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.164+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.164+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.164+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.164+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:06.165+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.165+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.165+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.165+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:06.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:06.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.166+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.167+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.167+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.167+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.167+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.167+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.167+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.167+0000 D STORAGE [repl writer worker 15] dropIndexes done
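Each SyncSourceFeedback entry in this log reports the secondary's applied position as `optime: Timestamp <seconds>|<increment>`, and these values should only move forward (|14, |16, |17, ... above). A small sketch that extracts the optimes and checks monotonicity; the regex and function name are mine, and the sample strings are fragments copied from the surrounding entries:

```python
import re

# "optime: Timestamp 1427905262000|5" -> (1427905262000, 5)
OPTIME = re.compile(r"optime: Timestamp (\d+)\|(\d+)")

def extract_optime(entry):
    """Return (seconds, increment) from a replSetUpdatePosition progress entry."""
    m = OPTIME.search(entry)
    return (int(m.group(1)), int(m.group(2))) if m else None

# Optime fragments as reported in the log above, in log order.
reported = [
    "optime: Timestamp 1427905262000|1",
    "optime: Timestamp 1427905262000|2",
    "optime: Timestamp 1427905262000|5",
    "optime: Timestamp 1427905262000|6",
]
optimes = [extract_optime(e) for e in reported]
# Tuple comparison orders by seconds first, then increment.
monotonic = all(a <= b for a, b in zip(optimes, optimes[1:]))
```

A feedback stream whose optimes ever went backwards would indicate the slave misreporting its replication progress.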
2015-04-01T16:21:06.167+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.167+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.168+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.168+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:06.168+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.168+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.168+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.170+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.170+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.171+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 1:10802000
2015-04-01T16:21:06.171+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.BulkMixedWriteOperationTests properties: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.171+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:06.171+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.171+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:06.171+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:06.171+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:06.171+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.171+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.172+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.172+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.173+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.173+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.174+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.174+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.174+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.174+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.174+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.174+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.174+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.175+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.175+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.175+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:06.175+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.176+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.176+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.176+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.176+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.176+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.176+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.177+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.177+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000
2015-04-01T16:21:06.177+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.177+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.178+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.178+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:06.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:06.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:06.179+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.179+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.179+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.179+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.179+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.179+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.179+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.180+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.180+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:06.180+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.180+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.180+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.180+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.181+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.181+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.181+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.181+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000
2015-04-01T16:21:06.181+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.181+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.181+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.182+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:06.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:06.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.182+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.183+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.183+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.183+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.183+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.183+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.183+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.183+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:06.184+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.184+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.184+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.185+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000
2015-04-01T16:21:06.185+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.185+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.186+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.186+0000 D REPL [rsSync] replication batch size is 9
2015-04-01T16:21:06.186+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:06.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.188+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.188+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.188+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.188+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.189+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.189+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests
2015-04-01T16:21:06.189+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.189+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.189+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:06.189+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.189+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.189+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:06.190+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" }
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {}
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000
2015-04-01T16:21:06.190+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:06.190+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.191+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:06.191+0000 D REPL [rsSync] replication batch size is 7
2015-04-01T16:21:06.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 }
2015-04-01T16:21:06.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1aeeb5355f778169cfef') }
2015-04-01T16:21:06.192+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:06.192+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.192+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.193+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.193+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.193+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.193+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.193+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.193+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.193+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.193+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.194+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.194+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.194+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.195+0000 D REPL [rsSync] replication batch size is 7 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.195+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.195+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.195+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.195+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.195+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.195+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.195+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.195+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.195+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.195+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.195+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.197+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.197+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.197+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.197+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.198+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.198+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.199+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.199+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.200+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.200+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.200+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.200+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.200+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.200+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.200+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.200+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.200+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.200+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.200+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.201+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.201+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.201+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.201+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:06.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.202+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.202+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.202+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.202+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.202+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.202+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.202+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.202+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.203+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.203+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: 
Timestamp 1427905262000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.203+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.203+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.203+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.203+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.203+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.203+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.204+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.204+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.204+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.204+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.204+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.205+0000 D REPL [rsSync] replication batch size is 8 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.206+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.206+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.206+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.206+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.206+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.206+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.206+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.206+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.206+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.207+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.207+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.207+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.207+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.207+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.208+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|76, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.208+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.208+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:06.209+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.210+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.210+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.210+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.210+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.210+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.210+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.210+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.210+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.210+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.210+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.211+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.211+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.211+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.212+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.212+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.212+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:06.213+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|96, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.213+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.214+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.214+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.214+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.214+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.214+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.214+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.214+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.215+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.215+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.215+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.215+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.215+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.216+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|98, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.216+0000 D REPL [rsSync] replication batch size is 10 2015-04-01T16:21:06.216+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1aeeb5355f778169cff0') } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1aeeb5355f778169cff1') } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:06.218+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.218+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.218+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.218+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.218+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.218+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.218+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.218+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.218+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.219+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.219+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.219+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.219+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.220+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|110, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.220+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.220+0000 D REPL [rsSync] replication batch size is 10 2015-04-01T16:21:06.220+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1aeeb5355f778169cff2') } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:06.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1aeeb5355f778169cff3') } 2015-04-01T16:21:06.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.222+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.222+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.222+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.222+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.222+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.222+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.222+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.222+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.223+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.223+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.223+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.223+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.223+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.224+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|122, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.224+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.224+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.225+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.225+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.226+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.226+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.226+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.226+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.226+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.226+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.226+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.226+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.226+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.226+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.227+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.227+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.227+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|133, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.228+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:06.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.229+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|137, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.229+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.229+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.230+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.230+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.230+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.230+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.230+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.230+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.230+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|138, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.230+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.231+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.231+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.231+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.231+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.231+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.231+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.231+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.232+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.232+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|139, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.232+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.233+0000 D REPL [rsSync] replication batch size is 10 2015-04-01T16:21:06.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.235+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.235+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.235+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|149, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.235+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.236+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.236+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.236+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.236+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.236+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.236+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.236+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.236+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|150, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.236+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.237+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.237+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.237+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.237+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.237+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.237+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.237+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.238+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.238+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.238+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.239+0000 D REPL [rsSync] replication batch size is 8 2015-04-01T16:21:06.239+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.239+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.241+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|159, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.241+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.241+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.241+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.241+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.241+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.242+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.242+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.242+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.242+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|160, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.242+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.243+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.243+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.243+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.244+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|161, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.244+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.245+0000 D REPL [rsSync] replication batch size is 7 2015-04-01T16:21:06.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.246+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.246+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.246+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|168, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.246+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.247+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.247+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.247+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.247+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.247+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.247+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.247+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|169, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.247+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.248+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.248+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.248+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.248+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.248+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.248+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.248+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.249+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.249+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|170, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.249+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.250+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.250+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.252+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.252+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|179, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.252+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.253+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.253+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.253+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.253+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.253+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.253+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.253+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.253+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.253+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.254+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.254+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.254+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.255+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|181, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.255+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.256+0000 D REPL [rsSync] replication batch size is 6 2015-04-01T16:21:06.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.257+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.257+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|187, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.257+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.258+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.258+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.258+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.258+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.258+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.258+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.258+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.258+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|188, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.258+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.259+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.259+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.259+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.259+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.259+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.259+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.259+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.260+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.260+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.260+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.261+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.261+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|190, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.261+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.262+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.262+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.262+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.262+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.262+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.262+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.262+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.263+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: 
"localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.263+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.263+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.263+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.263+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.263+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.263+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.264+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.264+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.264+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.264+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|192, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.264+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.265+0000 D REPL [rsSync] replication batch size is 7 2015-04-01T16:21:06.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.266+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.266+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.267+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.267+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.267+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.267+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.267+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.267+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.267+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.268+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|200, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.268+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.268+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.268+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.268+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.268+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.269+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.269+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.269+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.269+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.269+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|201, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.270+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.270+0000 D REPL [rsSync] replication batch size is 9 2015-04-01T16:21:06.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.272+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.272+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.272+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.273+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.273+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.BulkMixedWriteOperationTests 2015-04-01T16:21:06.273+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.273+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.273+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:06.273+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.273+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.273+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:06.274+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "BulkMixedWriteOperationTests" } 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.BulkMixedWriteOperationTests {} 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 1:10802000 2015-04-01T16:21:06.274+0000 D STORAGE [repl writer worker 15] Tests04011620.BulkMixedWriteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:06.274+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:06.275+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:06.276+0000 D REPL [rsSync] replication batch size is 6 2015-04-01T16:21:06.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:06.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:06.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:06.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:06.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:06.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:06.277+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905262000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:07.011+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:07.011+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:07.011+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:07.012+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:07.012+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, 
checkEmpty: false } 2015-04-01T16:21:07.012+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:07.150+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:07.150+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:07.150+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:09.150Z 2015-04-01T16:21:07.380+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:07.381+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:09.381Z 2015-04-01T16:21:07.383+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:07.383+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:07.383+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:07.383+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:07.384+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:08.186+0000 D REPL [rsBackgroundSync] bgsync buffer has 632 bytes 2015-04-01T16:21:08.525+0000 D REPL [rsBackgroundSync] bgsync buffer has 2557 bytes 2015-04-01T16:21:08.641+0000 D REPL [rsBackgroundSync] bgsync buffer has 4676 bytes 2015-04-01T16:21:09.011+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:09.011+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:09.011+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:09.012+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:09.012+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:09.012+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:09.073+0000 D COMMAND [conn13] run command admin.$cmd { isMaster: 1 } 
2015-04-01T16:21:09.075+0000 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 1ms 2015-04-01T16:21:09.075+0000 D COMMAND [conn13] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:09.075+0000 I COMMAND [conn13] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:09.150+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:09.150+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:09.151+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:09.151+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:11.151Z 2015-04-01T16:21:09.381+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:09.381+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:21:09.381+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:09.381+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:11.381Z 2015-04-01T16:21:09.624+0000 D REPL [rsBackgroundSync] bgsync buffer has 6573 bytes 2015-04-01T16:21:11.011+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:11.011+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:11.011+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 
keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:11.012+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:11.012+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:11.012+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:11.151+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:11.152+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:11.152+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:13.152Z 2015-04-01T16:21:11.302+0000 D REPL [rsBackgroundSync] bgsync buffer has 8371 bytes 2015-04-01T16:21:11.382+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:11.382+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:11.382+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:13.382Z 2015-04-01T16:21:11.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 10183 bytes 2015-04-01T16:21:12.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 12037 bytes 2015-04-01T16:21:12.472+0000 D REPL [rsBackgroundSync] bgsync buffer has 13907 bytes 2015-04-01T16:21:13.011+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:13.011+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: 
"localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:13.011+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:13.012+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:13.012+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:13.012+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:13.060+0000 D REPL [rsBackgroundSync] bgsync buffer has 15591 bytes 2015-04-01T16:21:13.153+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:13.153+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:13.153+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:15.153Z 2015-04-01T16:21:13.382+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:13.382+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:13.382+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:15.382Z 2015-04-01T16:21:14.398+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:14.399+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:14.403+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:14.409+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:14.409+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:14.409+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:14.412+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:14.413+0000 D REPL [rsBackgroundSync] bgsync buffer has 17631 bytes 2015-04-01T16:21:14.417+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:14.418+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.418+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905267000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.419+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.419+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 
2015-04-01T16:21:14.429+0000 D JOURNAL [journal writer] lsn set 70255
2015-04-01T16:21:14.434+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:14.438+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.003 secs
2015-04-01T16:21:14.441+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.441+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.442+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.442+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.442+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CountOperationTests" }
2015-04-01T16:21:14.442+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CountOperationTests {}
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] Tests04011620.CountOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:14.443+0000 D STORAGE [repl writer worker 15] Tests04011620.CountOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.444+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.445+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905267000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.445+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:14.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:14.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:14.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:14.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:14.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:14.447+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905267000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.447+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.447+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.448+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.448+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:14.448+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:14.471+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.472+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.474+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.477+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.477+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.478+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.479+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:14.480+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:14.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.482+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.483+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:14.494+0000 D JOURNAL [journal writer] lsn set 70325
2015-04-01T16:21:14.498+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:14.502+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.003 secs
2015-04-01T16:21:14.504+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.504+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.505+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.505+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.505+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests {}
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:14.506+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.507+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:14.507+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.508+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.508+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.509+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.509+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.510+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.510+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.510+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.510+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.510+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.511+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.511+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.512+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.512+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", autoIndexId: false }
2015-04-01T16:21:14.512+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { autoIndexId: false }
2015-04-01T16:21:14.513+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.513+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.513+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.513+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.514+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.514+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.514+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.515+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.515+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.516+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.516+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.517+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.518+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", autoIndexId: true }
2015-04-01T16:21:14.518+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { autoIndexId: true }
2015-04-01T16:21:14.518+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.518+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.518+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.518+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.519+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.520+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.520+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.521+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.521+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.522+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.522+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.522+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.522+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.522+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.523+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.524+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.524+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", capped: false }
2015-04-01T16:21:14.524+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { capped: false }
2015-04-01T16:21:14.524+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.524+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.524+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.525+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.525+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.525+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.525+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.525+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.526+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.526+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.526+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.526+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.526+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.526+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.527+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.527+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.527+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.528+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", capped: true, size: 10000 }
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { capped: true, size: 10000 }
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.528+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.529+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.529+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.529+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.529+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.530+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.530+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.530+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.530+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.530+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.530+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.531+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.531+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.531+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", capped: true, size: 10000, max: 123 }
2015-04-01T16:21:14.531+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { capped: true, size: 10000, max: 123 }
2015-04-01T16:21:14.531+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:2c000
2015-04-01T16:21:14.531+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.532+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.532+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.532+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.533+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.533+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.534+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.534+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.534+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.534+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.534+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.534+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.534+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.534+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.534+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.535+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.535+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", capped: true, size: 10000 }
2015-04-01T16:21:14.535+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { capped: true, size: 10000 }
2015-04-01T16:21:14.535+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:2f000
2015-04-01T16:21:14.536+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.536+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.536+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.536+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.536+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.537+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.537+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.537+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.537+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.538+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.538+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.538+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.538+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.538+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.539+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.539+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.539+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", flags: 0 }
2015-04-01T16:21:14.539+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { flags: 0 }
2015-04-01T16:21:14.539+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.539+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.540+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.540+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.540+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.540+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.541+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.541+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.541+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateCollectionOperationTests" }
2015-04-01T16:21:14.541+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.541+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateCollectionOperationTests
2015-04-01T16:21:14.541+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateCollectionOperationTests" }
2015-04-01T16:21:14.541+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.541+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.541+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.541+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.542+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.542+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateCollectionOperationTests", flags: 1 }
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateCollectionOperationTests { flags: 1 }
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.543+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.543+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.544+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.544+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:14.544+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" }
2015-04-01T16:21:14.544+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {}
2015-04-01T16:21:14.545+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:32000
2015-04-01T16:21:14.545+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.545+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.545+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:34000
2015-04-01T16:21:14.545+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.545+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.545+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.546+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.546+0000 D COMMAND [repl index builder 2] BackgroundJob starting: repl index builder 2 2015-04-01T16:21:14.546+0000 D INDEX [repl index builder 2] IndexBuilder building index { ns: "Tests04011620.CreateIndexesOperationTests", key: { x: 1 }, name: "x_1", background: true } 2015-04-01T16:21:14.546+0000 D STORAGE [repl index builder 2] allocating new extent 2015-04-01T16:21:14.546+0000 D STORAGE [repl index builder 2] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:54000 2015-04-01T16:21:14.547+0000 I INDEX [repl index builder 2] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", background: true } 2015-04-01T16:21:14.547+0000 D STORAGE [repl index builder 2] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.547+0000 I INDEX [repl index builder 2] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:14.547+0000 D STORAGE [repl index builder 2] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.547+0000 D STORAGE [repl index builder 2] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.547+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.547+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.548+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.549+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.549+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.549+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.549+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", background: true } 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] 
Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:14.549+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.549+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.549+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.549+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {} 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:32000 2015-04-01T16:21:14.549+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.551+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.551+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:54000 2015-04-01T16:21:14.551+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.551+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: 
ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.551+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.552+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.552+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.552+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:34000 2015-04-01T16:21:14.552+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.552+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:14.552+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.553+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:14.553+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:14.553+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:14.553+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.553+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.553+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.553+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.554+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.554+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.554+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.554+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.554+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.554+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.554+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.554+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing 
plan cache - collection info cache reset 2015-04-01T16:21:14.554+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:14.555+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.555+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.555+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.555+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {} 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:32000 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:34000 2015-04-01T16:21:14.556+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.556+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|25, 
memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.557+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.557+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.557+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.558+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:54000 2015-04-01T16:21:14.558+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.558+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:14.558+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.558+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:14.558+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:14.558+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:14.558+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.558+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.558+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.559+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.559+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.560+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.560+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:74000 2015-04-01T16:21:14.560+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.560+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:14.560+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.560+0000 D INDEX [repl writer worker 15] bulk commit starting for index: y_1 2015-04-01T16:21:14.560+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:14.560+0000 I INDEX [repl writer worker 15] build index done. 
scanned 0 total records. 0 secs 2015-04-01T16:21:14.560+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.560+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.560+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.560+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.561+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.561+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.561+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.561+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.561+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.561+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.562+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.562+0000 D STORAGE [repl writer worker 15] 
Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.562+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.562+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.562+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:14.562+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.562+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.563+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.563+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.563+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {} 2015-04-01T16:21:14.563+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:32000 2015-04-01T16:21:14.563+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.563+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.563+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:74000 2015-04-01T16:21:14.564+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.564+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.564+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.565+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.565+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.565+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:54000 2015-04-01T16:21:14.565+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", expireAfterSeconds: 1.5 } 2015-04-01T16:21:14.565+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:14.565+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.565+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:14.565+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:14.565+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:14.565+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.565+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.565+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.566+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.566+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.566+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.566+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.567+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.CreateIndexesOperationTests 2015-04-01T16:21:14.567+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateIndexesOperationTests" } 2015-04-01T16:21:14.567+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.567+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", expireAfterSeconds: 1.5 } 2015-04-01T16:21:14.567+0000 D STORAGE [repl writer worker 15] 
Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.567+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:14.567+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.567+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.568+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.568+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" } 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {} 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:32000 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:54000 2015-04-01T16:21:14.568+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.569+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: 
ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.569+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.569+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.569+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.570+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:74000 2015-04-01T16:21:14.570+0000 I INDEX [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", sparse: true } 2015-04-01T16:21:14.570+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:14.570+0000 D STORAGE [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.570+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:14.570+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:14.570+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs
2015-04-01T16:21:14.570+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.570+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.570+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.570+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.571+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.571+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { drop: "CreateIndexesOperationTests" }
2015-04-01T16:21:14.571+0000 I COMMAND  [repl writer worker 15] CMD: drop Tests04011620.CreateIndexesOperationTests
2015-04-01T16:21:14.571+0000 D STORAGE  [repl writer worker 15] dropCollection: Tests04011620.CreateIndexesOperationTests
2015-04-01T16:21:14.571+0000 D INDEX    [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.CreateIndexesOperationTests" }
2015-04-01T16:21:14.571+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.571+0000 D INDEX    [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests", sparse: true }
2015-04-01T16:21:14.571+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.572+0000 D STORAGE  [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.572+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.572+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.572+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.573+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { create: "CreateIndexesOperationTests" }
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] create collection Tests04011620.CreateIndexesOperationTests {}
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:32000
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:74000
2015-04-01T16:21:14.573+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.573+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.573+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.574+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.574+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.574+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:54000
2015-04-01T16:21:14.574+0000 I INDEX    [repl writer worker 15] build index on: Tests04011620.CreateIndexesOperationTests properties: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011620.CreateIndexesOperationTests" }
2015-04-01T16:21:14.575+0000 I INDEX    [repl writer worker 15] building index using bulk method
2015-04-01T16:21:14.575+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.575+0000 D INDEX    [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:14.575+0000 D INDEX    [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:14.575+0000 I INDEX    [repl writer worker 15] build index done.  scanned 0 total records. 0 secs
2015-04-01T16:21:14.575+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.575+0000 D STORAGE  [repl writer worker 15] Tests04011620.CreateIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.575+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905268000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.576+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.576+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.576+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.576+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:14.576+0000 D STORAGE  [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:14.649+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.650+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.653+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.658+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.658+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.658+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.659+0000 D STORAGE  [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:14.660+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:14.661+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.661+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.662+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.662+0000 I INDEX    [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\DatabaseExistsOperationTests.ns, filling with zeroes...
2015-04-01T16:21:14.671+0000 D JOURNAL  [journal writer] lsn set 70495
2015-04-01T16:21:14.675+0000 I STORAGE  [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\DatabaseExistsOperationTests.0, filling with zeroes...
2015-04-01T16:21:14.679+0000 I STORAGE  [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\DatabaseExistsOperationTests.0, size: 16MB, took 0.003 secs
2015-04-01T16:21:14.680+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.680+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.680+0000 D STORAGE  [repl writer worker 15] DatabaseExistsOperationTests.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.680+0000 D STORAGE  [repl writer worker 15] DatabaseExistsOperationTests.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.680+0000 D COMMAND  [repl writer worker 15] run command DatabaseExistsOperationTests.$cmd { create: "DatabaseExistsOperationTests" }
2015-04-01T16:21:14.680+0000 D STORAGE  [repl writer worker 15] create collection DatabaseExistsOperationTests.DatabaseExistsOperationTests {}
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6000
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] DatabaseExistsOperationTests.DatabaseExistsOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8000
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a000
2015-04-01T16:21:14.681+0000 D STORAGE  [repl writer worker 15] DatabaseExistsOperationTests.DatabaseExistsOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.681+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.682+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.682+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.682+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af5b5355f778169cff4') }
2015-04-01T16:21:14.683+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.683+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.683+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.683+0000 D COMMAND  [repl writer worker 15] run command DatabaseExistsOperationTests.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.683+0000 I COMMAND  [repl writer worker 15] dropDatabase DatabaseExistsOperationTests starting
2015-04-01T16:21:14.683+0000 D STORAGE  [repl writer worker 15] dropDatabase DatabaseExistsOperationTests
2015-04-01T16:21:14.703+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.705+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.708+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.711+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.711+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.712+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.713+0000 D STORAGE  [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\DatabaseExistsOperationTests.ns
2015-04-01T16:21:14.715+0000 I COMMAND  [repl writer worker 15] dropDatabase DatabaseExistsOperationTests finished
2015-04-01T16:21:14.716+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.717+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.717+0000 I INDEX    [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:14.719+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.730+0000 D JOURNAL  [journal writer] lsn set 70545
2015-04-01T16:21:14.738+0000 I STORAGE  [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:14.741+0000 I STORAGE  [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:14.743+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.743+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.744+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { create: "DistinctOperationTests" }
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] create collection Tests04011620.DistinctOperationTests {}
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] Tests04011620.DistinctOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:14.744+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.745+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:14.745+0000 D STORAGE  [repl writer worker 15] Tests04011620.DistinctOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.745+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.745+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.746+0000 D REPL     [rsSync] replication batch size is 5
2015-04-01T16:21:14.746+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:14.746+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:14.746+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:14.746+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:14.746+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:14.746+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.746+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.747+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.747+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.747+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:14.747+0000 D STORAGE  [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:14.766+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.767+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.770+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.774+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.774+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.774+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.775+0000 D STORAGE  [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:14.776+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:14.777+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.777+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905269000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.777+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.777+0000 I INDEX    [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:14.788+0000 D JOURNAL  [journal writer] lsn set 70605
2015-04-01T16:21:14.790+0000 I STORAGE  [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:14.792+0000 I STORAGE  [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.001 secs
2015-04-01T16:21:14.794+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.794+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.795+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { create: "DropCollectionOperationTests" }
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] create collection Tests04011620.DropCollectionOperationTests {}
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.795+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:14.796+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.796+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.796+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.796+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.796+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { drop: "DropCollectionOperationTests" }
2015-04-01T16:21:14.796+0000 I COMMAND  [repl writer worker 15] CMD: drop Tests04011620.DropCollectionOperationTests
2015-04-01T16:21:14.796+0000 D STORAGE  [repl writer worker 15] dropCollection: Tests04011620.DropCollectionOperationTests
2015-04-01T16:21:14.796+0000 D INDEX    [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.DropCollectionOperationTests" }
2015-04-01T16:21:14.796+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.796+0000 D STORAGE  [repl writer worker 15] dropIndexes done
2015-04-01T16:21:14.797+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.797+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.797+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.797+0000 I INDEX    [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-DropDatabaseOperationTests.ns, filling with zeroes...
2015-04-01T16:21:14.805+0000 I STORAGE  [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-DropDatabaseOperationTests.0, filling with zeroes...
2015-04-01T16:21:14.808+0000 I STORAGE  [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-DropDatabaseOperationTests.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:14.810+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.810+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:14.810+0000 D STORAGE  [repl writer worker 15] Tests04011620-DropDatabaseOperationTests.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.810+0000 D STORAGE  [repl writer worker 15] Tests04011620-DropDatabaseOperationTests.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.810+0000 D COMMAND  [repl writer worker 15] run command Tests04011620-DropDatabaseOperationTests.$cmd { create: "test" }
2015-04-01T16:21:14.810+0000 D STORAGE  [repl writer worker 15] create collection Tests04011620-DropDatabaseOperationTests.test {}
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6000
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] Tests04011620-DropDatabaseOperationTests.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8000
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a000
2015-04-01T16:21:14.811+0000 D STORAGE  [repl writer worker 15] Tests04011620-DropDatabaseOperationTests.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.811+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.811+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.812+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.812+0000 D QUERY    [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af6b5355f778169cff5') }
2015-04-01T16:21:14.812+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.812+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.813+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.813+0000 D COMMAND  [repl writer worker 15] run command Tests04011620-DropDatabaseOperationTests.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.813+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620-DropDatabaseOperationTests starting
2015-04-01T16:21:14.813+0000 D STORAGE  [repl writer worker 15] dropDatabase Tests04011620-DropDatabaseOperationTests
2015-04-01T16:21:14.838+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.840+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.841+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.847+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.847+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.847+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.848+0000 D STORAGE  [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-DropDatabaseOperationTests.ns
2015-04-01T16:21:14.849+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620-DropDatabaseOperationTests finished
2015-04-01T16:21:14.850+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.850+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.850+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.851+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { create: "DropIndexOperationTests" }
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] create collection Tests04011620.DropIndexOperationTests {}
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000
2015-04-01T16:21:14.851+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.851+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.852+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.852+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.852+0000 D STORAGE  [repl writer worker 15] allocating new extent
2015-04-01T16:21:14.852+0000 D STORAGE  [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:21:14.853+0000 I INDEX    [repl writer worker 15] build index on: Tests04011620.DropIndexOperationTests properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620.DropIndexOperationTests" }
2015-04-01T16:21:14.853+0000 I INDEX    [repl writer worker 15] building index using bulk method
2015-04-01T16:21:14.853+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.853+0000 D INDEX    [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:14.853+0000 D INDEX    [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:14.853+0000 I INDEX    [repl writer worker 15] build index done.  scanned 0 total records. 0 secs
2015-04-01T16:21:14.853+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.853+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.853+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.853+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.854+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.854+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { dropIndexes: "DropIndexOperationTests", index: "x_1" }
2015-04-01T16:21:14.854+0000 I COMMAND  [repl writer worker 15] CMD: dropIndexes Tests04011620.DropIndexOperationTests
2015-04-01T16:21:14.854+0000 D STORAGE  [repl writer worker 15] Tests04011620.DropIndexOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:14.854+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.855+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.855+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.855+0000 D COMMAND  [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:14.855+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:14.855+0000 D STORAGE  [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:14.858+0000 D JOURNAL  [journal writer] lsn set 70675
2015-04-01T16:21:14.874+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.874+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.877+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.881+0000 I JOURNAL  [repl writer worker 15] journalCleanup...
2015-04-01T16:21:14.881+0000 I JOURNAL  [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:14.881+0000 D JOURNAL  [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:14.882+0000 D STORAGE  [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:14.883+0000 I COMMAND  [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:14.883+0000 D QUERY    [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:14.884+0000 D REPL     [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:14.884+0000 D REPL     [rsSync] replication batch size is 1
2015-04-01T16:21:14.885+0000 I INDEX    [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:14.898+0000 D JOURNAL  [journal writer] lsn set 70715
2015-04-01T16:21:14.902+0000 I STORAGE  [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:14.905+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.003 secs 2015-04-01T16:21:14.907+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.907+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:14.907+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.907+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.908+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "ExplainOperationTests" } 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.ExplainOperationTests {} 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] Tests04011620.ExplainOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:14.908+0000 D STORAGE [repl writer worker 15] Tests04011620.ExplainOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.909+0000 D QUERY [rsSync] Only one plan 
is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.909+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.910+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af6b5355f778169cff6') } 2015-04-01T16:21:14.910+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905270000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.910+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.910+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.911+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:14.911+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:14.911+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:14.930+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:14.932+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:14.934+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:14.936+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:14.936+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:14.936+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:14.937+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:14.938+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:14.939+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.940+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.940+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.940+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 
2015-04-01T16:21:14.951+0000 D JOURNAL [journal writer] lsn set 70765 2015-04-01T16:21:14.955+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes... 2015-04-01T16:21:14.958+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.002 secs 2015-04-01T16:21:14.961+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.961+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.962+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndDeleteOperationTests" } 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndDeleteOperationTests {} 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndDeleteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.962+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.963+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:14.963+0000 D STORAGE [repl writer worker 15] allocating new extent 
2015-04-01T16:21:14.963+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:14.963+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndDeleteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.963+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.963+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.964+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:14.964+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:14.964+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:14.965+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.965+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.965+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.966+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndDeleteOperationTests" } 2015-04-01T16:21:14.966+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndDeleteOperationTests 2015-04-01T16:21:14.966+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndDeleteOperationTests 2015-04-01T16:21:14.966+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndDeleteOperationTests" } 2015-04-01T16:21:14.966+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndDeleteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.966+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:14.966+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.966+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.967+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.967+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndDeleteOperationTests" } 2015-04-01T16:21:14.967+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndDeleteOperationTests {} 2015-04-01T16:21:14.967+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:14.967+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndDeleteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.967+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:14.968+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:14.968+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndDeleteOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:14.968+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.968+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.969+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.969+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:14.969+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:14.970+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:14.971+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:14.971+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:14.971+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:14.971+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:14.994+0000 D REPL [rsBackgroundSync] bgsync buffer has 10597 bytes 2015-04-01T16:21:14.997+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:14.997+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.002+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.005+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.005+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.006+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.007+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:15.008+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:15.008+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.008+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.009+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.009+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 
2015-04-01T16:21:15.011+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:15.011+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:15.011+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:15.012+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:15.012+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:15.012+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:15.021+0000 D JOURNAL [journal writer] lsn set 70835 2015-04-01T16:21:15.025+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes... 
2015-04-01T16:21:15.029+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.004 secs 2015-04-01T16:21:15.031+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.031+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.032+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.032+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.033+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:15.033+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.033+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:15.033+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 
2015-04-01T16:21:15.033+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.034+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.035+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.035+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.036+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.036+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.036+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.036+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.037+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.037+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.037+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: 
"Tests04011620.FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.037+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.037+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.038+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.040+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.040+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.040+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.040+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.041+0000 D QUERY [rsSync] Only 
one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.041+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.042+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.042+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.042+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.043+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.043+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.043+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.043+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.043+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.043+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.043+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.043+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 
1427905271000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.044+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.044+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.044+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.044+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.044+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.044+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.045+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.045+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.045+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.045+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.045+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { 
replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.046+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.046+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.046+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.047+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.047+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.048+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.048+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.048+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.048+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.048+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.048+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.048+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.049+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.049+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.050+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.050+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.050+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.050+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.051+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.051+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.051+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.052+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.052+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.052+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.052+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.052+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.052+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.052+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.053+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.053+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|20, memberId: 1, cfgver: 1, config: { _id: 1, 
host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.053+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.053+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.053+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.053+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.053+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.053+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.054+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.054+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.054+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.054+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.054+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:15.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.055+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.055+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.055+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.055+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.055+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndReplaceOperationTests 2015-04-01T16:21:15.055+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.055+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.055+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.056+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: 
ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.056+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.056+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.056+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndReplaceOperationTests" } 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndReplaceOperationTests {} 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.056+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndReplaceOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.057+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.057+0000 D QUERY [rsSync] Only one plan 
is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.058+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:15.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.058+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.059+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.060+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.060+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:15.060+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:15.060+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:15.079+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.080+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.083+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.088+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:15.089+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.089+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.090+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:15.092+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:15.092+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.092+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.093+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 2015-04-01T16:21:15.093+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905271000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.103+0000 D JOURNAL [journal writer] lsn set 70905 2015-04-01T16:21:15.109+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes... 
2015-04-01T16:21:15.118+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.009 secs 2015-04-01T16:21:15.127+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.127+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:15.127+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.128+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:15.128+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 
2015-04-01T16:21:15.129+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.129+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.129+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.130+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.130+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.130+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.130+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.130+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.130+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.130+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.130+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.131+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.131+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.131+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.131+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.132+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.132+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.133+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.133+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.133+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.133+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.133+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.133+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.133+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.134+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.134+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.134+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.135+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.135+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.136+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.136+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.136+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.136+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.136+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.136+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.136+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.138+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.138+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.138+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.138+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.139+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.139+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.139+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.139+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.140+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.140+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.140+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.140+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.140+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.140+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.140+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.140+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.140+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.140+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.141+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.141+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.141+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.141+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.141+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.141+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.142+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:15.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.142+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.142+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.142+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.142+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.142+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.FindOneAndUpdateOperationTests 2015-04-01T16:21:15.142+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.143+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.143+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.143+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.143+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOneAndUpdateOperationTests" } 2015-04-01T16:21:15.143+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOneAndUpdateOperationTests {} 2015-04-01T16:21:15.143+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.143+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.144+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.144+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.144+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOneAndUpdateOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.144+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.144+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:15.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:15.145+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.145+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.145+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:15.145+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:15.145+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:15.145+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.153+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:15.153+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:15.154+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:15.154+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:17.154Z 2015-04-01T16:21:15.161+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.162+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.164+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.166+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:15.166+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.167+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.168+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:15.169+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:15.169+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.169+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.170+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.171+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 2015-04-01T16:21:15.179+0000 D JOURNAL [journal writer] lsn set 70985 2015-04-01T16:21:15.184+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes... 
2015-04-01T16:21:15.195+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.01 secs 2015-04-01T16:21:15.196+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.196+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:15.197+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.197+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.197+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "FindOperationTests" } 2015-04-01T16:21:15.197+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.FindOperationTests {} 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:15.198+0000 D STORAGE [repl writer worker 15] Tests04011620.FindOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.198+0000 D QUERY [rsSync] Only one plan is 
available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.199+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.199+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:15.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:15.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:15.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:15.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:15.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:15.200+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.200+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.200+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:15.200+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:15.200+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:15.201+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.216+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.217+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.219+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.221+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.221+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.221+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.223+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:15.225+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:15.225+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.225+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905272000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.225+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.226+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes... 2015-04-01T16:21:15.236+0000 D JOURNAL [journal writer] lsn set 71035 2015-04-01T16:21:15.238+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes... 
2015-04-01T16:21:15.242+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.003 secs 2015-04-01T16:21:15.244+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.244+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.245+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "InsertOpcodeOperationTests" } 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.InsertOpcodeOperationTests {} 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] Tests04011620.InsertOpcodeOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:15.245+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.246+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:15.246+0000 D STORAGE [repl writer worker 15] Tests04011620.InsertOpcodeOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.246+0000 D REPL 
[SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.246+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.246+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:15.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.247+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.248+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.248+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "InsertOpcodeOperationTests" } 2015-04-01T16:21:15.248+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.InsertOpcodeOperationTests 2015-04-01T16:21:15.248+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.InsertOpcodeOperationTests 2015-04-01T16:21:15.248+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.InsertOpcodeOperationTests" } 2015-04-01T16:21:15.248+0000 D STORAGE [repl writer worker 15] Tests04011620.InsertOpcodeOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.248+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:15.248+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.248+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.249+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.249+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "InsertOpcodeOperationTests" } 2015-04-01T16:21:15.249+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.InsertOpcodeOperationTests {} 2015-04-01T16:21:15.249+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:15.249+0000 D STORAGE [repl writer worker 15] Tests04011620.InsertOpcodeOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.249+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.250+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:15.250+0000 D STORAGE [repl writer worker 15] Tests04011620.InsertOpcodeOperationTests: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.250+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.250+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.251+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:15.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:15.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:15.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:15.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:15.252+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.252+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.253+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.253+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 } 2015-04-01T16:21:15.253+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting 2015-04-01T16:21:15.253+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620 2015-04-01T16:21:15.270+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.271+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.272+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.274+0000 I JOURNAL [repl writer worker 15] journalCleanup... 
2015-04-01T16:21:15.274+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.275+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.276+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns 2015-04-01T16:21:15.277+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished 2015-04-01T16:21:15.277+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.278+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.278+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.279+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListCollectionsOperationTests.ns, filling with zeroes... 2015-04-01T16:21:15.291+0000 D JOURNAL [journal writer] lsn set 71085 2015-04-01T16:21:15.295+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListCollectionsOperationTests.0, filling with zeroes... 
2015-04-01T16:21:15.298+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListCollectionsOperationTests.0, size: 16MB, took 0.002 secs 2015-04-01T16:21:15.299+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.299+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.300+0000 D COMMAND [repl writer worker 15] run command Tests04011620-ListCollectionsOperationTests.$cmd { create: "regular" } 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-ListCollectionsOperationTests.regular {} 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6000 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.regular: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.300+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.301+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8000 2015-04-01T16:21:15.301+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.301+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a000 2015-04-01T16:21:15.301+0000 D STORAGE [repl writer worker 15] 
Tests04011620-ListCollectionsOperationTests.regular: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.301+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.301+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.302+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.302+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.302+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:2a000 2015-04-01T16:21:15.303+0000 I INDEX [repl writer worker 15] build index on: Tests04011620-ListCollectionsOperationTests.regular properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620-ListCollectionsOperationTests.regular" } 2015-04-01T16:21:15.303+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:15.303+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.regular: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.303+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:15.303+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:15.303+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:15.303+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.regular: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.303+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.regular: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.303+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.303+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.304+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af9b5355f778169cff7') } 2015-04-01T16:21:15.304+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.304+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.305+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.305+0000 D COMMAND [repl writer worker 15] run command Tests04011620-ListCollectionsOperationTests.$cmd { create: "capped", capped: true, size: 10000 } 2015-04-01T16:21:15.305+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-ListCollectionsOperationTests.capped { capped: true, size: 10000 } 2015-04-01T16:21:15.305+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:4a000 2015-04-01T16:21:15.306+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.capped: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.306+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.306+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:4d000 2015-04-01T16:21:15.306+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.capped: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.306+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.306+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.308+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.308+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:15.308+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:6d000 2015-04-01T16:21:15.309+0000 I INDEX [repl writer worker 15] build index on: Tests04011620-ListCollectionsOperationTests.capped properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011620-ListCollectionsOperationTests.capped" } 2015-04-01T16:21:15.309+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:15.309+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.capped: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.309+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:15.309+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:15.309+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:15.309+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.capped: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.309+0000 D STORAGE [repl writer worker 15] Tests04011620-ListCollectionsOperationTests.capped: clearing plan cache - collection info cache reset 2015-04-01T16:21:15.309+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.309+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.313+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af9b5355f778169cff8') } 2015-04-01T16:21:15.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.313+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.314+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.314+0000 D COMMAND [repl writer worker 15] run command Tests04011620-ListCollectionsOperationTests.$cmd { dropDatabase: 1 } 2015-04-01T16:21:15.314+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-ListCollectionsOperationTests starting 2015-04-01T16:21:15.314+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620-ListCollectionsOperationTests 2015-04-01T16:21:15.340+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.341+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.343+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.345+0000 I JOURNAL [repl writer worker 15] journalCleanup... 2015-04-01T16:21:15.346+0000 I JOURNAL [repl writer worker 15] removeJournalFiles 2015-04-01T16:21:15.346+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end 2015-04-01T16:21:15.347+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListCollectionsOperationTests.ns 2015-04-01T16:21:15.350+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-ListCollectionsOperationTests finished 2015-04-01T16:21:15.351+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:15.351+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:15.351+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:15.352+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListDatabasesOperationTests.ns, filling with zeroes... 2015-04-01T16:21:15.361+0000 D JOURNAL [journal writer] lsn set 71155 2015-04-01T16:21:15.366+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListDatabasesOperationTests.0, filling with zeroes... 
2015-04-01T16:21:15.373+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListDatabasesOperationTests.0, size: 16MB, took 0.006 secs
2015-04-01T16:21:15.374+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.375+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:15.375+0000 D STORAGE [repl writer worker 15] Tests04011620-ListDatabasesOperationTests.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.375+0000 D STORAGE [repl writer worker 15] Tests04011620-ListDatabasesOperationTests.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.375+0000 D COMMAND [repl writer worker 15] run command Tests04011620-ListDatabasesOperationTests.$cmd { create: "test" }
2015-04-01T16:21:15.375+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-ListDatabasesOperationTests.test {}
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6000
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] Tests04011620-ListDatabasesOperationTests.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8000
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a000
2015-04-01T16:21:15.376+0000 D STORAGE [repl writer worker 15] Tests04011620-ListDatabasesOperationTests.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.377+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.377+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.378+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1af9b5355f778169cff9') }
2015-04-01T16:21:15.378+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.378+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905273000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.380+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.380+0000 D COMMAND [repl writer worker 15] run command Tests04011620-ListDatabasesOperationTests.$cmd { dropDatabase: 1 }
2015-04-01T16:21:15.380+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-ListDatabasesOperationTests starting
2015-04-01T16:21:15.381+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620-ListDatabasesOperationTests
2015-04-01T16:21:15.382+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:15.382+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:21:15.382+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:15.383+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:17.383Z
2015-04-01T16:21:15.394+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.395+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.397+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.400+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.400+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.400+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.401+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-ListDatabasesOperationTests.ns
2015-04-01T16:21:15.402+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-ListDatabasesOperationTests finished
2015-04-01T16:21:15.402+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.402+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.403+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.404+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:15.414+0000 D JOURNAL [journal writer] lsn set 71205
2015-04-01T16:21:15.417+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:15.420+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:15.423+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.423+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:15.424+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.424+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.424+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "ListIndexesOperationTests" }
2015-04-01T16:21:15.424+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.ListIndexesOperationTests {}
2015-04-01T16:21:15.424+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:15.424+0000 D STORAGE [repl writer worker 15] Tests04011620.ListIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.425+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.425+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:15.425+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.425+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:15.425+0000 D STORAGE [repl writer worker 15] Tests04011620.ListIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.425+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.426+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.426+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.426+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:15.427+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.427+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.428+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.428+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { drop: "ListIndexesOperationTests" }
2015-04-01T16:21:15.428+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011620.ListIndexesOperationTests
2015-04-01T16:21:15.428+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620.ListIndexesOperationTests
2015-04-01T16:21:15.428+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620.ListIndexesOperationTests" }
2015-04-01T16:21:15.428+0000 D STORAGE [repl writer worker 15] Tests04011620.ListIndexesOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.429+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:15.429+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.430+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.430+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.430+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:15.430+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:15.431+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:15.448+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.449+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.452+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.454+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.454+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.455+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.456+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:15.458+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:15.458+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.458+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.459+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.459+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:15.470+0000 D JOURNAL [journal writer] lsn set 71255
2015-04-01T16:21:15.474+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:15.478+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.003 secs
2015-04-01T16:21:15.480+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.480+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.481+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "MapReduceOperationTests" }
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.MapReduceOperationTests {}
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] Tests04011620.MapReduceOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:15.481+0000 D STORAGE [repl writer worker 15] Tests04011620.MapReduceOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.482+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.482+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:15.482+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:15.483+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:15.483+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:15.483+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.483+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.484+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.484+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:15.484+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:15.484+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:15.505+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.507+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.509+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.512+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.513+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.513+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.514+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:15.515+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:15.515+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.516+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.516+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.517+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns, filling with zeroes...
2015-04-01T16:21:15.528+0000 D JOURNAL [journal writer] lsn set 71315
2015-04-01T16:21:15.533+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, filling with zeroes...
2015-04-01T16:21:15.541+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.0, size: 16MB, took 0.007 secs
2015-04-01T16:21:15.543+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.543+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] Tests04011620.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] Tests04011620.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.544+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "MapReduceOutputToCollectionOperationTests" }
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.MapReduceOutputToCollectionOperationTests {}
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] Tests04011620.MapReduceOutputToCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.544+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:15.545+0000 D STORAGE [repl writer worker 15] Tests04011620.MapReduceOutputToCollectionOperationTests: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.545+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.545+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.548+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:15.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:15.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:15.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:15.549+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.549+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.550+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.550+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { create: "tmp.mr.MapReduceOutputToCollectionOperationTests_0", temp: true }
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] create collection Tests04011620.tmp.mr.MapReduceOutputToCollectionOperationTests_0 { temp: true }
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:29000
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] Tests04011620.tmp.mr.MapReduceOutputToCollectionOperationTests_0: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:2b000
2015-04-01T16:21:15.550+0000 D STORAGE [repl writer worker 15] Tests04011620.tmp.mr.MapReduceOutputToCollectionOperationTests_0: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.551+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.551+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.551+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:15.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1.0 }
2015-04-01T16:21:15.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2.0 }
2015-04-01T16:21:15.552+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.552+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.553+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.553+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011620.tmp.mr.MapReduceOutputToCollectionOperationTests_0", to: "Tests04011620.Tests04011620.MapReduceOutputToCollectionOperationTestsOutput", stayTemp: false }
2015-04-01T16:21:15.553+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011620.tmp.mr.MapReduceOutputToCollectionOperationTests_0", to: "Tests04011620.Tests04011620.MapReduceOutputToCollectionOperationTestsOutput", stayTemp: false }
2015-04-01T16:21:15.553+0000 D STORAGE [repl writer worker 15] Tests04011620.Tests04011620.MapReduceOutputToCollectionOperationTestsOutput: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.554+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.554+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.554+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.554+0000 D COMMAND [repl writer worker 15] run command Tests04011620.$cmd { dropDatabase: 1 }
2015-04-01T16:21:15.555+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 starting
2015-04-01T16:21:15.555+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620
2015-04-01T16:21:15.586+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.587+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.590+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.594+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.594+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.594+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.596+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620.ns
2015-04-01T16:21:15.598+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620 finished
2015-04-01T16:21:15.598+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.598+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.599+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.599+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-RenameCollectionOperationTests.ns, filling with zeroes...
2015-04-01T16:21:15.609+0000 D JOURNAL [journal writer] lsn set 71395
2015-04-01T16:21:15.615+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-RenameCollectionOperationTests.0, filling with zeroes...
2015-04-01T16:21:15.621+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-RenameCollectionOperationTests.0, size: 16MB, took 0.005 secs
2015-04-01T16:21:15.622+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.622+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:15.623+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.623+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.623+0000 D COMMAND [repl writer worker 15] run command Tests04011620-RenameCollectionOperationTests.$cmd { create: "old" }
2015-04-01T16:21:15.623+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-RenameCollectionOperationTests.old {}
2015-04-01T16:21:15.623+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6000
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8000
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:a000
2015-04-01T16:21:15.624+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.624+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905274000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.624+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.625+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.625+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011620-RenameCollectionOperationTests.old", to: "Tests04011620-RenameCollectionOperationTests.new" }
2015-04-01T16:21:15.625+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011620-RenameCollectionOperationTests.old", to: "Tests04011620-RenameCollectionOperationTests.new" }
2015-04-01T16:21:15.625+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.new: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.625+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.625+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.626+0000 D COMMAND [repl writer worker 15] run command Tests04011620-RenameCollectionOperationTests.$cmd { create: "old" }
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-RenameCollectionOperationTests.old {}
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:2a000
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:2c000
2015-04-01T16:21:15.626+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.626+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.626+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.627+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905275000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.627+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011620-RenameCollectionOperationTests.old", to: "Tests04011620-RenameCollectionOperationTests.new", dropTarget: true }
2015-04-01T16:21:15.628+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011620-RenameCollectionOperationTests.old", to: "Tests04011620-RenameCollectionOperationTests.new", dropTarget: true }
2015-04-01T16:21:15.628+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011620-RenameCollectionOperationTests.new
2015-04-01T16:21:15.628+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011620-RenameCollectionOperationTests.new" }
2015-04-01T16:21:15.628+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.new: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.628+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:15.628+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.new: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.628+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905275000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.628+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.629+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.629+0000 D COMMAND [repl writer worker 15] run command Tests04011620-RenameCollectionOperationTests.$cmd { create: "old" }
2015-04-01T16:21:15.629+0000 D STORAGE [repl writer worker 15] create collection Tests04011620-RenameCollectionOperationTests.old {}
2015-04-01T16:21:15.629+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6000
2015-04-01T16:21:15.629+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.629+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:15.630+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:a000
2015-04-01T16:21:15.630+0000 D STORAGE [repl writer worker 15] Tests04011620-RenameCollectionOperationTests.old: clearing plan cache - collection info cache reset
2015-04-01T16:21:15.630+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905275000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.630+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:15.630+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:15.630+0000 D COMMAND [repl writer worker 15] run command Tests04011620-RenameCollectionOperationTests.$cmd { dropDatabase: 1 }
2015-04-01T16:21:15.630+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-RenameCollectionOperationTests starting
2015-04-01T16:21:15.630+0000 D STORAGE [repl writer worker 15] dropDatabase Tests04011620-RenameCollectionOperationTests
2015-04-01T16:21:15.648+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.648+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.650+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.653+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:15.653+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:15.653+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:15.654+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011620-RenameCollectionOperationTests.ns
2015-04-01T16:21:15.655+0000 I COMMAND [repl writer worker 15] dropDatabase Tests04011620-RenameCollectionOperationTests finished
2015-04-01T16:21:15.655+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905275000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:15.756+0000 D JOURNAL [journal writer] lsn set 71445
2015-04-01T16:21:17.012+0000 D COMMAND [conn14] run command 
admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:17.012+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:17.012+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:17.013+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:17.013+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:17.013+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:17.154+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:17.155+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:17.155+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:19.155Z 2015-04-01T16:21:17.383+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:17.383+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:17.383+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:19.383Z 2015-04-01T16:21:17.708+0000 D NETWORK [conn13] SocketException: remote: 127.0.0.1:62972 error: 9001 socket exception [CLOSED] server [127.0.0.1:62972] 
2015-04-01T16:21:17.708+0000 I NETWORK [conn13] end connection 127.0.0.1:62972 (2 connections now open) 2015-04-01T16:21:19.012+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:19.012+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:19.012+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:19.013+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:19.013+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:19.013+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:19.155+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:19.156+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:19.156+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:21.156Z 2015-04-01T16:21:19.383+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:19.383+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:19.383+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 
2015-04-01T16:21:21.383Z 2015-04-01T16:21:20.621+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62982 #16 (3 connections now open) 2015-04-01T16:21:20.808+0000 W NETWORK [conn16] no SSL certificate provided by peer 2015-04-01T16:21:21.013+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:21.013+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:21.013+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:21.063+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:21.063+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:21.063+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:21.069+0000 D QUERY [conn16] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:21.070+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:21.070+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:21.179+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:21.179+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:21.179+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:21.179+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:23.179Z 2015-04-01T16:21:21.295+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:21.295+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:21.354+0000 D COMMAND [conn16] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D6C2E27564C223831783F44226C214C6226626C51) } 2015-04-01T16:21:21.354+0000 D QUERY [conn16] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" } 2015-04-01T16:21:21.354+0000 D QUERY [conn16] Only one plan is available; it will be run but will not be cached. 
query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 } 2015-04-01T16:21:21.355+0000 I COMMAND [conn16] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D6C2E27564C223831783F44226C214C6226626C51) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms 2015-04-01T16:21:21.386+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:21.386+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:21:21.387+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:21.387+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:23.387Z 2015-04-01T16:21:21.512+0000 D COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D6C2E27564C223831783F44226C214C6226626C5158636A504A387370474437306E32536A6D3375467A3738624B476346467475632C703D4B35716E3643...) } 2015-04-01T16:21:21.512+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D6C2E27564C223831783F44226C214C6226626C5158636A504A387370474437306E32536A6D3375467A3738624B476346467475632C703D4B35716E3643...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:21:21.521+0000 D COMMAND [conn16] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:21:21.522+0000 D QUERY [conn16] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" } 2015-04-01T16:21:21.522+0000 D QUERY [conn16] Only one plan is available; it will be run but will not be cached. 
query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 } 2015-04-01T16:21:21.522+0000 I ACCESS [conn16] Successfully authenticated as principal bob on admin 2015-04-01T16:21:21.523+0000 I COMMAND [conn16] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 1ms 2015-04-01T16:21:21.547+0000 D COMMAND [conn16] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:21:21.547+0000 I COMMAND [conn16] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:21:21.549+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:21.550+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 1ms 2015-04-01T16:21:21.551+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:21.551+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:21.789+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:21.789+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:21.790+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.ns, filling with zeroes... 2015-04-01T16:21:21.819+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.0, filling with zeroes... 
2015-04-01T16:21:21.827+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.0, size: 16MB, took 0.007 secs 2015-04-01T16:21:21.836+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:21.836+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] Tests04011621.system.namespaces: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] Tests04011621.system.indexes: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.837+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_meta_text" } 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_meta_text {} 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000 2015-04-01T16:21:21.837+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.838+0000 D QUERY [rsSync] Only one plan is available; it will be 
run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:21.839+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:21.840+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:21.840+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:29000 2015-04-01T16:21:21.840+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_meta_text properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 } 2015-04-01T16:21:21.840+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:21.840+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.840+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text 2015-04-01T16:21:21.840+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:21.840+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:21.840+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.840+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:21.841+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905281000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:21.882+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:21.883+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:21.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:21.883+0000 D INDEX [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key. 2015-04-01T16:21:21.884+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905281000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:21.924+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:21.924+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:21.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:21.925+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905281000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.208+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.208+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.208+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "test_meta_text" } 2015-04-01T16:21:22.209+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.test_meta_text 2015-04-01T16:21:22.209+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.test_meta_text 2015-04-01T16:21:22.209+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.test_meta_text" } 2015-04-01T16:21:22.209+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.209+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 } 2015-04-01T16:21:22.209+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 
2015-04-01T16:21:22.209+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:22.211+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.213+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.213+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.214+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_meta_text" } 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_meta_text {} 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:5000 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:29000 2015-04-01T16:21:22.215+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.215+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: 
true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.217+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.217+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:22.217+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:9000 2015-04-01T16:21:22.217+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_meta_text properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 } 2015-04-01T16:21:22.217+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:22.217+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.217+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text 2015-04-01T16:21:22.218+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:22.218+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:22.218+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.218+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.218+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.333+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.333+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:22.334+0000 D INDEX [repl writer worker 15] Tests04011621.test_meta_text: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key. 2015-04-01T16:21:22.335+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.336+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.337+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:22.337+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:22.447+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:22.448+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:22.448+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:22.448+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_text" } 2015-04-01T16:21:22.448+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_text {} 2015-04-01T16:21:22.448+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:49000 2015-04-01T16:21:22.448+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.449+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:22.449+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:4b000 2015-04-01T16:21:22.449+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset 2015-04-01T16:21:22.450+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { 
replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.451+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.451+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.452+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.452+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:6b000
2015-04-01T16:21:22.452+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_text properties: { v: 1, key: { _fts: "text", _ftsx: 1, c: 1 }, name: "custom", ns: "Tests04011621.test_text", language_override: "idioma", default_language: "spanish", weights: { a: 1, b: 1 }, textIndexVersion: 2 }
2015-04-01T16:21:22.452+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.452+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.452+0000 D INDEX [repl writer worker 15] bulk commit starting for index: custom
2015-04-01T16:21:22.452+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.452+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.452+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.452+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.454+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.595+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.595+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.595+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "test_text" }
2015-04-01T16:21:22.595+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.test_text
2015-04-01T16:21:22.595+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.test_text
2015-04-01T16:21:22.596+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.test_text" }
2015-04-01T16:21:22.596+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.596+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _fts: "text", _ftsx: 1, c: 1 }, name: "custom", ns: "Tests04011621.test_text", language_override: "idioma", default_language: "spanish", weights: { a: 1, b: 1 }, textIndexVersion: 2 }
2015-04-01T16:21:22.596+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.596+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:22.596+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.598+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.599+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.599+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_text" }
2015-04-01T16:21:22.599+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_text {}
2015-04-01T16:21:22.599+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:49000
2015-04-01T16:21:22.599+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.599+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.600+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6b000
2015-04-01T16:21:22.600+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.600+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.602+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.603+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.604+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.604+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:4b000
2015-04-01T16:21:22.604+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_text properties: { v: 1, key: { _fts: "text", _ftsx: 1, c: 1 }, name: "custom", ns: "Tests04011621.test_text", language_override: "idioma", default_language: "spanish", weights: { a: 1, b: 1 }, textIndexVersion: 2 }
2015-04-01T16:21:22.604+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.604+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.604+0000 D INDEX [repl writer worker 15] bulk commit starting for index: custom
2015-04-01T16:21:22.604+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.604+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.604+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.604+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.605+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.727+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.727+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.728+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "geo" }
2015-04-01T16:21:22.728+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.geo {}
2015-04-01T16:21:22.728+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:8b000
2015-04-01T16:21:22.728+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.728+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.729+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:8d000
2015-04-01T16:21:22.729+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.729+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.731+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.731+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.731+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.731+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:ad000
2015-04-01T16:21:22.733+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.733+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.733+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.733+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere
2015-04-01T16:21:22.733+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.733+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.733+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.733+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.734+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.735+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.735+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.735+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.735+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:cd000
2015-04-01T16:21:22.735+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.735+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.735+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.736+0000 D INDEX [repl writer worker 15] bulk commit starting for index: sur_2dsphere
2015-04-01T16:21:22.736+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.736+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.736+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.736+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.736+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.811+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.818+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.821+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:22.825+0000 D INDEX [repl writer worker 15] Tests04011621.geo: clearing plan cache - index { sur: "2dsphere" } set to multi key.
2015-04-01T16:21:22.826+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.845+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.846+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.846+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "geo" }
2015-04-01T16:21:22.846+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.geo
2015-04-01T16:21:22.846+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.geo
2015-04-01T16:21:22.846+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.geo" }
2015-04-01T16:21:22.846+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.846+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.846+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.846+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.846+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.847+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:22.847+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.849+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.850+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.851+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "geo" }
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.geo {}
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:8b000
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:cd000
2015-04-01T16:21:22.851+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.853+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.853+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.854+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.854+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.854+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ad000
2015-04-01T16:21:22.855+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.855+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.855+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.855+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere
2015-04-01T16:21:22.855+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.855+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.855+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.855+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.855+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.860+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.860+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.860+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:8d000
2015-04-01T16:21:22.860+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.860+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.861+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.861+0000 D INDEX [repl writer worker 15] bulk commit starting for index: sur_2dsphere
2015-04-01T16:21:22.861+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.861+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.861+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.861+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.861+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.861+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.868+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:22.874+0000 D REPL [rsBackgroundSync] bgsync buffer has 94 bytes
2015-04-01T16:21:22.874+0000 D INDEX [repl writer worker 15] Tests04011621.geo: clearing plan cache - index { sur: "2dsphere" } set to multi key.
2015-04-01T16:21:22.874+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.874+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.875+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.875+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "geo" }
2015-04-01T16:21:22.875+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.geo
2015-04-01T16:21:22.875+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.geo
2015-04-01T16:21:22.875+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.geo" }
2015-04-01T16:21:22.875+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.876+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.876+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.876+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.876+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.876+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:22.877+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.878+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.879+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.879+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "geo" }
2015-04-01T16:21:22.879+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.geo {}
2015-04-01T16:21:22.880+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:8b000
2015-04-01T16:21:22.880+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.880+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.880+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:8d000
2015-04-01T16:21:22.880+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.881+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.881+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.882+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.882+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.882+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ad000
2015-04-01T16:21:22.882+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.882+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.882+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.882+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere
2015-04-01T16:21:22.882+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.882+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.882+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.882+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.883+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.883+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.883+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.884+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.884+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:cd000
2015-04-01T16:21:22.884+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.884+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.884+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.884+0000 D INDEX [repl writer worker 15] bulk commit starting for index: sur_2dsphere
2015-04-01T16:21:22.884+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.884+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.884+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.885+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.885+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.885+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.889+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:22.893+0000 D INDEX [repl writer worker 15] Tests04011621.geo: clearing plan cache - index { sur: "2dsphere" } set to multi key.
2015-04-01T16:21:22.893+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.900+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.900+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.901+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "geo" }
2015-04-01T16:21:22.901+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.geo
2015-04-01T16:21:22.901+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.geo
2015-04-01T16:21:22.901+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.geo" }
2015-04-01T16:21:22.901+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.901+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.901+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.901+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.901+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.901+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:22.902+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.903+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.904+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.904+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "geo" }
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.geo {}
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:8b000
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:cd000
2015-04-01T16:21:22.904+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.905+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.905+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.907+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.908+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.908+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ad000
2015-04-01T16:21:22.908+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.908+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.908+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.908+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere
2015-04-01T16:21:22.908+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.908+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.908+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.908+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.909+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.909+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.909+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.910+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:22.910+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:8d000
2015-04-01T16:21:22.910+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.geo properties: { v: 1, key: { sur: "2dsphere" }, name: "sur_2dsphere", ns: "Tests04011621.geo", 2dsphereIndexVersion: 2 }
2015-04-01T16:21:22.910+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:22.910+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.910+0000 D INDEX [repl writer worker 15] bulk commit starting for index: sur_2dsphere
2015-04-01T16:21:22.910+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:22.910+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:22.910+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.910+0000 D STORAGE [repl writer worker 15] Tests04011621.geo: clearing plan cache - collection info cache reset
2015-04-01T16:21:22.910+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:22.911+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:22.915+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:22.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:22.919+0000 D INDEX [repl writer worker 15] Tests04011621.geo: clearing plan cache - index { sur: "2dsphere" } set to multi key.
2015-04-01T16:21:22.919+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905282000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.014+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:23.014+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:23.014+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:23.051+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.051+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.051+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:23.051+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:23.052+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:ed000 2015-04-01T16:21:23.052+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.052+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.052+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:ef000 2015-04-01T16:21:23.052+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.053+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.053+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.054+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.054+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:10f000 2015-04-01T16:21:23.054+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.054+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:23.054+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.054+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2d 2015-04-01T16:21:23.054+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:23.054+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs 2015-04-01T16:21:23.055+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.055+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.055+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.055+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:23.056+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.059+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.059+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:23.059+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.065+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:23.065+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:23.065+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 
numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:23.080+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.080+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.081+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:23.081+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:23.081+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:23.081+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.081+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.081+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.081+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.081+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:23.082+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.083+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes 2015-04-01T16:21:23.084+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.084+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.084+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:23.084+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:23.084+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:23.085+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.085+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.085+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000 2015-04-01T16:21:23.085+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.086+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.086+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.089+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.089+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.089+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:23.089+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.089+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:23.089+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.089+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2d 2015-04-01T16:21:23.089+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:23.089+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:23.089+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.089+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.090+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.090+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.091+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:23.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:23.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:23.091+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.096+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.097+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.097+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:23.097+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:23.097+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:23.097+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.097+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.098+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.098+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.098+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:23.099+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.100+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.100+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.101+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:23.101+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.104+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.104+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.104+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.104+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000 2015-04-01T16:21:23.104+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:23.105+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:23.105+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.105+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere 2015-04-01T16:21:23.105+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:23.105+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:23.105+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.105+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.105+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.107+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.107+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:23.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:23.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:23.108+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.112+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.112+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.112+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:23.113+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:23.113+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:23.113+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.113+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.113+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:23.113+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.113+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:23.114+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.116+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.117+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.117+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:23.117+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:23.118+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:23.118+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.118+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.118+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000 2015-04-01T16:21:23.118+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.119+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.121+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.121+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.121+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:23.121+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:23.121+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:23.121+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:23.121+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.122+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2dsphere 2015-04-01T16:21:23.122+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:23.122+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:23.122+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.122+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.123+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.123+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:23.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:23.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:23.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.127+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:23.127+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:23.127+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:23.127+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:23.128+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:23.128+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:23.128+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.128+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2dsphere" }, name: "loc_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:23.128+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:23.128+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:23.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:23.130+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes 2015-04-01T16:21:23.130+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.130+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.131+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:23.131+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:23.131+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:23.131+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.131+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.131+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:23.132+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.133+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.133+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.135+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.135+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.135+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000
2015-04-01T16:21:23.136+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:23.136+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.136+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.136+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2d
2015-04-01T16:21:23.136+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.136+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.136+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.136+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.136+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.137+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.137+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:23.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.138+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.142+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.142+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.142+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:23.142+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:23.142+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:23.143+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:23.143+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.143+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:23.143+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.143+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:23.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.144+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.145+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.146+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000
2015-04-01T16:21:23.146+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.147+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.147+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.148+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.149+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.149+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:23.149+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:23.149+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.149+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.149+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2d
2015-04-01T16:21:23.150+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.150+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.150+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.150+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.150+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.151+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.151+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:23.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.152+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.180+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:23.180+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:23.180+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:25.180Z
2015-04-01T16:21:23.217+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.217+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.218+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "test_text" }
2015-04-01T16:21:23.218+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.test_text
2015-04-01T16:21:23.218+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.test_text
2015-04-01T16:21:23.218+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.test_text" }
2015-04-01T16:21:23.218+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.218+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _fts: "text", _ftsx: 1, c: 1 }, name: "custom", ns: "Tests04011621.test_text", language_override: "idioma", default_language: "spanish", weights: { a: 1, b: 1 }, textIndexVersion: 2 }
2015-04-01T16:21:23.218+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.218+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:23.219+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.221+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.221+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.221+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_text" }
2015-04-01T16:21:23.221+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_text {}
2015-04-01T16:21:23.221+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:49000
2015-04-01T16:21:23.221+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.221+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.222+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:4b000
2015-04-01T16:21:23.222+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.223+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.223+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.223+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.223+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6b000
2015-04-01T16:21:23.223+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_text properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_text", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 }
2015-04-01T16:21:23.223+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.223+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.224+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text
2015-04-01T16:21:23.224+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.224+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.224+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.224+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.224+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.224+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.225+0000 D INDEX [repl writer worker 15] Tests04011621.test_text: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key.
2015-04-01T16:21:23.225+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.227+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.227+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.228+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.238+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.238+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.239+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_text_spanish" }
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_text_spanish {}
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:12f000
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:131000
2015-04-01T16:21:23.239+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.240+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.241+0000 D REPL [rsBackgroundSync] bgsync buffer has 215 bytes
2015-04-01T16:21:23.241+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.242+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.242+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.242+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:151000
2015-04-01T16:21:23.243+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_text_spanish properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_text_spanish", default_language: "spanish", weights: { textfield: 1 }, language_override: "language", textIndexVersion: 2 }
2015-04-01T16:21:23.243+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.243+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.243+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text
2015-04-01T16:21:23.243+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.243+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.243+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.243+0000 D STORAGE [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.245+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.245+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.246+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:23.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.247+0000 D INDEX [repl writer worker 15] Tests04011621.test_text_spanish: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key.
2015-04-01T16:21:23.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.387+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:23.388+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:23.388+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:25.388Z
2015-04-01T16:21:23.761+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.763+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.763+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_meta_text_sort" }
2015-04-01T16:21:23.763+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_meta_text_sort {}
2015-04-01T16:21:23.764+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:171000
2015-04-01T16:21:23.764+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.764+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.764+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:173000
2015-04-01T16:21:23.765+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.765+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.765+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.766+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.767+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.768+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:193000
2015-04-01T16:21:23.768+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_meta_text_sort properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text_sort", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 }
2015-04-01T16:21:23.768+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.768+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.768+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text
2015-04-01T16:21:23.768+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.768+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.768+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.768+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.769+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.769+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.769+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:23.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.770+0000 D INDEX [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key.
2015-04-01T16:21:23.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:23.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:23.770+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.815+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.816+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.816+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "test_meta_text_sort" }
2015-04-01T16:21:23.816+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.test_meta_text_sort
2015-04-01T16:21:23.816+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.test_meta_text_sort
2015-04-01T16:21:23.816+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.test_meta_text_sort" }
2015-04-01T16:21:23.816+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.816+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text_sort", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 }
2015-04-01T16:21:23.816+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.816+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:23.817+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.818+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.819+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.819+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test_meta_text_sort" }
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test_meta_text_sort {}
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:171000
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:193000
2015-04-01T16:21:23.819+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.820+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.820+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.821+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.822+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:23.822+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:173000
2015-04-01T16:21:23.822+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.test_meta_text_sort properties: { v: 1, key: { _fts: "text", _ftsx: 1 }, name: "textfield_text", ns: "Tests04011621.test_meta_text_sort", weights: { textfield: 1 }, default_language: "english", language_override: "language", textIndexVersion: 2 }
2015-04-01T16:21:23.822+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:23.822+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.822+0000 D INDEX [repl writer worker 15] bulk commit starting for index: textfield_text
2015-04-01T16:21:23.822+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:23.822+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:23.822+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.822+0000 D STORAGE [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - collection info cache reset
2015-04-01T16:21:23.823+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.828+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.829+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.829+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:23.829+0000 D INDEX [repl writer worker 15] Tests04011621.test_meta_text_sort: clearing plan cache - index { _fts: "text", _ftsx: 1 } set to multi key.
2015-04-01T16:21:23.829+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.831+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.832+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:23.832+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:23.832+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:23.833+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:23.834+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:23.835+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:23.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:23.836+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905283000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:24.197+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:24.198+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:24.198+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:24.198+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:24.198+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:24.199+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:24.199+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:24.199+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:24.199+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:24.199+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:24.199+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:24.199+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:24.201+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:24.201+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:24.201+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:24.201+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:24.202+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:24.202+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:24.202+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:24.202+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:24.202+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:24.202+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:24.203+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.204+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:24.205+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.390+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62986 #17 (4 connections now open) 2015-04-01T16:21:24.428+0000 D QUERY [conn17] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.428+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:24.429+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 1ms 2015-04-01T16:21:24.430+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:24.430+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:24.431+0000 D COMMAND [conn17] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D342D30343F464536344346295121644B3322682E) } 2015-04-01T16:21:24.431+0000 I COMMAND [conn17] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D342D30343F464536344346295121644B3322682E) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms 2015-04-01T16:21:24.521+0000 D 
COMMAND [conn17] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D342D30343F464536344346295121644B3322682E367A57397A2B6A2B4D757374796C304C7833452B3876772B6449766948746C372C703D7A516E683455...) } 2015-04-01T16:21:24.521+0000 I COMMAND [conn17] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D342D30343F464536344346295121644B3322682E367A57397A2B6A2B4D757374796C304C7833452B3876772B6449766948746C372C703D7A516E683455...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:21:24.521+0000 D COMMAND [conn17] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:21:24.521+0000 I ACCESS [conn17] Successfully authenticated as principal bob on admin 2015-04-01T16:21:24.522+0000 I COMMAND [conn17] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:21:24.522+0000 D COMMAND [conn17] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:21:24.522+0000 I COMMAND [conn17] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:21:24.527+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:24.527+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:24.528+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:24.528+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:24.647+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.648+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.648+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:24.648+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:24.648+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:24.648+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:24.648+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.648+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:24.649+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.657+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.658+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.658+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:24.658+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.659+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.660+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.661+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be23') } 2015-04-01T16:21:24.662+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.725+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.726+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be24') } 2015-04-01T16:21:24.727+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.763+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.764+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:24.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be23') } 2015-04-01T16:21:24.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be24') } 2015-04-01T16:21:24.765+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.770+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.770+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be25') } 2015-04-01T16:21:24.771+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.917+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.918+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.918+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "fs.files" } 2015-04-01T16:21:24.918+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.fs.files {} 2015-04-01T16:21:24.919+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:1b3000 2015-04-01T16:21:24.919+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.919+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.919+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000 2015-04-01T16:21:24.919+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.920+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.921+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.921+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.922+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.922+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:32768 fromFreeList: 0 eloc: 0:1b5000 2015-04-01T16:21:24.922+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.922+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:1bd000 2015-04-01T16:21:24.922+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.fs.files properties: { v: 1, key: { filename: 1, uploadDate: 1 }, name: "filename_1_uploadDate_1", ns: "Tests04011621.fs.files" } 2015-04-01T16:21:24.922+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:24.922+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.922+0000 D INDEX [repl writer worker 15] bulk commit starting for index: filename_1_uploadDate_1 2015-04-01T16:21:24.923+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:24.923+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:24.923+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.923+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.924+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.924+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.924+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.925+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "fs.chunks" } 2015-04-01T16:21:24.925+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.fs.chunks {} 2015-04-01T16:21:24.925+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:1dd000 2015-04-01T16:21:24.925+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.925+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.926+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:1df000 2015-04-01T16:21:24.926+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.927+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: 
[ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.927+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.927+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.928+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:24.928+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:1ff000 2015-04-01T16:21:24.928+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.fs.chunks properties: { v: 1, unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "Tests04011621.fs.chunks" } 2015-04-01T16:21:24.928+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:24.928+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.928+0000 D INDEX [repl writer worker 15] bulk commit starting for index: files_id_1_n_1 2015-04-01T16:21:24.928+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:24.928+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:24.928+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.929+0000 D STORAGE [repl writer worker 15] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:24.930+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.930+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.931+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be26') } 2015-04-01T16:21:24.932+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.946+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:24.946+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.946+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.946+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be27') } 2015-04-01T16:21:24.947+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.961+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.962+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be26') } 2015-04-01T16:21:24.962+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.974+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.974+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.974+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be26') } 2015-04-01T16:21:24.975+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.977+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.977+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.978+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be27') } 2015-04-01T16:21:24.978+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.982+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.982+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.982+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be28') } 2015-04-01T16:21:24.982+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.989+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.990+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.991+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be29') } 2015-04-01T16:21:24.991+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.993+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.993+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be2a') } 2015-04-01T16:21:24.994+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.996+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:24.997+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:24.997+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be2b') } 2015-04-01T16:21:24.998+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:24.999+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.000+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.000+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be28') } 2015-04-01T16:21:25.000+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905284000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.010+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.011+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.011+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be28') } 2015-04-01T16:21:25.012+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.013+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.014+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:25.014+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:25.014+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:25.014+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:25.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be29') } 2015-04-01T16:21:25.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be2a') } 2015-04-01T16:21:25.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be2b') } 2015-04-01T16:21:25.015+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.019+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.020+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.020+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2c') } 2015-04-01T16:21:25.021+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.022+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.022+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.022+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2d') } 2015-04-01T16:21:25.023+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.025+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.025+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.025+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.026+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2e') } 2015-04-01T16:21:25.026+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.028+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.028+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.029+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2f') } 2015-04-01T16:21:25.029+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.031+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.031+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.032+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2c') } 2015-04-01T16:21:25.032+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.041+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.041+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.042+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2c') } 2015-04-01T16:21:25.043+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.044+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.045+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:25.045+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2d') } 2015-04-01T16:21:25.045+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2e') } 2015-04-01T16:21:25.046+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be2f') } 2015-04-01T16:21:25.046+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.050+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.051+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.051+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be31') } 2015-04-01T16:21:25.051+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be30') } 2015-04-01T16:21:25.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.055+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.055+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be32') } 2015-04-01T16:21:25.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.058+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.058+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.058+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be33') } 2015-04-01T16:21:25.059+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.060+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.061+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.061+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be30') } 2015-04-01T16:21:25.062+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.065+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:25.066+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:25.066+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:25.071+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.072+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.073+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be30') } 2015-04-01T16:21:25.075+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.075+0000 D REPL [rsBackgroundSync] bgsync buffer has 212 bytes 2015-04-01T16:21:25.076+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.077+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:25.077+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be31') } 2015-04-01T16:21:25.078+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be32') } 2015-04-01T16:21:25.078+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be33') } 2015-04-01T16:21:25.078+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.079+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.079+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.079+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be34') } 2015-04-01T16:21:25.079+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.086+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.087+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.087+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be34') } 2015-04-01T16:21:25.087+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.093+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.093+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.094+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be34') } 2015-04-01T16:21:25.094+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.101+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.101+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.102+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:25.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.106+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.107+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.107+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be35') } 2015-04-01T16:21:25.107+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.109+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.109+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.109+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:25.110+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.118+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.119+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.119+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:25.119+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.121+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.122+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.122+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be35') } 2015-04-01T16:21:25.122+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.127+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.128+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.128+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be36') } 2015-04-01T16:21:25.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.130+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.131+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.131+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be37') } 2015-04-01T16:21:25.131+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.134+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.135+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.135+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be36') } 2015-04-01T16:21:25.135+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.154+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.154+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.154+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be36') } 2015-04-01T16:21:25.154+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.157+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.157+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.157+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be37') } 2015-04-01T16:21:25.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.162+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.162+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.163+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be38') } 2015-04-01T16:21:25.163+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.165+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.165+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.165+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.166+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be39') } 2015-04-01T16:21:25.166+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|37, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.171+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.171+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.172+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be38') } 2015-04-01T16:21:25.172+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.180+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:25.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.181+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:25.181+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.181+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:27.181Z 2015-04-01T16:21:25.181+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be39') } 2015-04-01T16:21:25.181+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.186+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.186+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.187+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be38') } 2015-04-01T16:21:25.187+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.192+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.193+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.193+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be39') } 2015-04-01T16:21:25.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be38') } 2015-04-01T16:21:25.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.209+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.210+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3a') } 2015-04-01T16:21:25.212+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|43, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.212+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.212+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3b') } 2015-04-01T16:21:25.213+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.218+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3a') } 2015-04-01T16:21:25.219+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.221+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.221+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.222+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3b') } 2015-04-01T16:21:25.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.224+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3a') } 2015-04-01T16:21:25.225+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.244+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.245+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.245+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3a') } 2015-04-01T16:21:25.246+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.247+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.248+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.248+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3b') } 2015-04-01T16:21:25.248+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.253+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.254+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.254+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3c') } 2015-04-01T16:21:25.254+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.256+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.257+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.257+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3d') } 2015-04-01T16:21:25.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.260+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.260+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.260+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3c') } 2015-04-01T16:21:25.260+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.267+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.267+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3d') } 2015-04-01T16:21:25.268+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.272+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.272+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3c') } 2015-04-01T16:21:25.273+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|54, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.283+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.284+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3c') } 2015-04-01T16:21:25.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.286+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.287+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3d') } 2015-04-01T16:21:25.287+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.300+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.300+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3f') } 2015-04-01T16:21:25.301+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.305+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.305+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3e') } 2015-04-01T16:21:25.306+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|58, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.312+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be41') } 2015-04-01T16:21:25.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|59, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.314+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.315+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.315+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be40') } 2015-04-01T16:21:25.315+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.327+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.328+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.328+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3e') } 2015-04-01T16:21:25.329+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.330+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.331+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.331+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be3f') } 2015-04-01T16:21:25.332+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.333+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.334+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be40') } 2015-04-01T16:21:25.335+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.336+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.337+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be41') } 2015-04-01T16:21:25.337+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.339+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.339+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be43') } 2015-04-01T16:21:25.340+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.342+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.343+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be42') } 2015-04-01T16:21:25.343+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.348+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.348+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.349+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be45') } 2015-04-01T16:21:25.350+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.351+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.351+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be44') } 2015-04-01T16:21:25.353+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|68, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.362+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.362+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be42') } 2015-04-01T16:21:25.363+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|69, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.365+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.365+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be43') } 2015-04-01T16:21:25.366+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.368+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.369+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be45') } 2015-04-01T16:21:25.369+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be44') } 2015-04-01T16:21:25.371+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.374+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.375+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be47') } 2015-04-01T16:21:25.376+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|73, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.377+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.377+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.378+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be46') } 2015-04-01T16:21:25.378+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.383+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.383+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.383+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be49') } 2015-04-01T16:21:25.384+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.386+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.387+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.387+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be48') } 2015-04-01T16:21:25.387+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|76, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.389+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:25.390+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:25.390+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:27.390Z 2015-04-01T16:21:25.393+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.394+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.395+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be46') } 2015-04-01T16:21:25.395+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.396+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.397+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.397+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be47') } 2015-04-01T16:21:25.397+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.399+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.400+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.400+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be48') } 2015-04-01T16:21:25.400+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|79, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.402+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.402+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be49') } 2015-04-01T16:21:25.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.408+0000 D REPL [rsBackgroundSync] bgsync buffer has 176 bytes 2015-04-01T16:21:25.408+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.408+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.408+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4a') } 2015-04-01T16:21:25.408+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4b') } 2015-04-01T16:21:25.409+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.411+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.412+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.412+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4a') } 2015-04-01T16:21:25.412+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|83, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.420+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.420+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.421+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4a') } 2015-04-01T16:21:25.421+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|84, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.423+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.424+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4b') } 2015-04-01T16:21:25.425+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.429+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.429+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.430+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4c') } 2015-04-01T16:21:25.430+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4d') } 2015-04-01T16:21:25.430+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.432+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.433+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.433+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4c') } 2015-04-01T16:21:25.433+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|88, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.440+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.440+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.441+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4c') } 2015-04-01T16:21:25.442+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|89, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.443+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.444+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.444+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4d') } 2015-04-01T16:21:25.444+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|90, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.446+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.446+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.446+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4e') }
2015-04-01T16:21:25.447+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.449+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.450+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.450+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4f') }
2015-04-01T16:21:25.451+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|92, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.453+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.453+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.453+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4e') }
2015-04-01T16:21:25.454+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.460+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.461+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.461+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4e') }
2015-04-01T16:21:25.462+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|94, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.464+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.465+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.465+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be4f') }
2015-04-01T16:21:25.465+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|95, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.466+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.467+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.467+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be50') }
2015-04-01T16:21:25.468+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|96, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.469+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:25.470+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.470+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.471+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be51') }
2015-04-01T16:21:25.471+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.473+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.473+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.473+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be50') }
2015-04-01T16:21:25.473+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|98, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.479+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.479+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.479+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be50') }
2015-04-01T16:21:25.480+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|99, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.482+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.482+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.482+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be51') }
2015-04-01T16:21:25.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.492+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.492+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.493+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be53') }
2015-04-01T16:21:25.493+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|101, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.494+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.494+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.495+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be52') }
2015-04-01T16:21:25.495+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.500+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.500+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.500+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be54') }
2015-04-01T16:21:25.501+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be55') }
2015-04-01T16:21:25.501+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|104, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.506+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.506+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.507+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be52') }
2015-04-01T16:21:25.507+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be53') }
2015-04-01T16:21:25.507+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.510+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.511+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.511+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be54') }
2015-04-01T16:21:25.511+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|107, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.513+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.513+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.513+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be55') }
2015-04-01T16:21:25.514+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.520+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.520+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.521+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be57') }
2015-04-01T16:21:25.521+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.524+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.524+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be56') }
2015-04-01T16:21:25.524+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|110, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.529+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.529+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.530+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be56') }
2015-04-01T16:21:25.530+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|111, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.532+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:25.533+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.533+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.533+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be57') }
2015-04-01T16:21:25.534+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.539+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.539+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.539+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be59') }
2015-04-01T16:21:25.540+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|113, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.542+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.543+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.543+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be58') }
2015-04-01T16:21:25.543+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.548+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.548+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.549+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be59') }
2015-04-01T16:21:25.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be58') }
2015-04-01T16:21:25.549+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|116, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.557+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.557+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5b') }
2015-04-01T16:21:25.558+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.560+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.560+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5a') }
2015-04-01T16:21:25.561+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|118, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.568+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.568+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5a') }
2015-04-01T16:21:25.569+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|119, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.571+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.572+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5b') }
2015-04-01T16:21:25.572+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.577+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.577+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5d') }
2015-04-01T16:21:25.578+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.580+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.580+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5c') }
2015-04-01T16:21:25.581+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|122, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.586+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.586+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5c') }
2015-04-01T16:21:25.587+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.589+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.589+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5d') }
2015-04-01T16:21:25.590+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|124, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.596+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.596+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5f') }
2015-04-01T16:21:25.597+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|125, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.598+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.598+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5e') }
2015-04-01T16:21:25.599+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.606+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:25.606+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.606+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5e') }
2015-04-01T16:21:25.608+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|127, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.610+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.610+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be5f') }
2015-04-01T16:21:25.611+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|128, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.614+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.614+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be61') }
2015-04-01T16:21:25.614+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be60') }
2015-04-01T16:21:25.615+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.621+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.622+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.622+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be60') }
2015-04-01T16:21:25.623+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.624+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.625+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.625+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be61') }
2015-04-01T16:21:25.625+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.639+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.639+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.641+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be63') }
2015-04-01T16:21:25.641+0000 D STORAGE [repl writer worker 14] allocating new extent
2015-04-01T16:21:25.641+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:4194304 fromFreeList: 0 eloc: 0:21f000
2015-04-01T16:21:25.659+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.659+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|133, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.660+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.661+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be64') }
2015-04-01T16:21:25.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be62') }
2015-04-01T16:21:25.673+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|135, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.687+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.687+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be62') }
2015-04-01T16:21:25.689+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|136, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.696+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.697+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be63') }
2015-04-01T16:21:25.698+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be64') }
2015-04-01T16:21:25.698+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|138, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.699+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.700+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.701+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be66') }
2015-04-01T16:21:25.701+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|139, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.703+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.703+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.703+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be65') }
2015-04-01T16:21:25.704+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.713+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.713+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.714+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be65') }
2015-04-01T16:21:25.714+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|141, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.716+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:25.716+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.716+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.717+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be66') }
2015-04-01T16:21:25.717+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.719+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.720+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be68') } 2015-04-01T16:21:25.720+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.722+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.722+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.723+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be67') } 2015-04-01T16:21:25.723+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|144, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.735+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.735+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be67') } 2015-04-01T16:21:25.735+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|145, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.738+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.739+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.739+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be68') } 2015-04-01T16:21:25.739+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|146, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.741+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.742+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.742+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6a') } 2015-04-01T16:21:25.742+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|147, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.744+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.745+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.745+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be69') } 2015-04-01T16:21:25.746+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|148, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.752+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.752+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.752+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be69') } 2015-04-01T16:21:25.752+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|149, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.755+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.756+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.756+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6a') } 2015-04-01T16:21:25.756+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|150, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.758+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.759+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.759+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6c') } 2015-04-01T16:21:25.760+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.761+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.761+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.762+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6b') } 2015-04-01T16:21:25.762+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|152, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.768+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.769+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6b') } 2015-04-01T16:21:25.769+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.771+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.772+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6c') } 2015-04-01T16:21:25.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|154, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.777+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.778+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6e') } 2015-04-01T16:21:25.779+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|155, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.781+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.781+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6d') } 2015-04-01T16:21:25.781+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|156, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.786+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.786+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.786+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6d') } 2015-04-01T16:21:25.787+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|157, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.789+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.789+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6e') } 2015-04-01T16:21:25.790+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|158, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.792+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.792+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be70') } 2015-04-01T16:21:25.793+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|159, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.795+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.795+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6f') } 2015-04-01T16:21:25.795+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|160, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.798+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.798+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be72') } 2015-04-01T16:21:25.799+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|161, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.801+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.801+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be71') } 2015-04-01T16:21:25.802+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|162, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.809+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.809+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be6f') } 2015-04-01T16:21:25.809+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|163, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.812+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.812+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be70') } 2015-04-01T16:21:25.813+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|164, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.815+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.816+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be72') } 2015-04-01T16:21:25.817+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be71') } 2015-04-01T16:21:25.817+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|166, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.819+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.819+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.819+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be74') } 2015-04-01T16:21:25.820+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|167, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.822+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.822+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.822+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be73') } 2015-04-01T16:21:25.822+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|168, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.825+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.825+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.826+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be76') } 2015-04-01T16:21:25.826+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|169, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.828+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.829+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.829+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be75') } 2015-04-01T16:21:25.829+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|170, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.836+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.836+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.837+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be73') } 2015-04-01T16:21:25.837+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|171, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.839+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:25.840+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.840+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.840+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be74') } 2015-04-01T16:21:25.840+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|172, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.843+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.843+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be75') } 2015-04-01T16:21:25.844+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be76') } 2015-04-01T16:21:25.844+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.848+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.849+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:25.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be77') } 2015-04-01T16:21:25.849+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be78') } 2015-04-01T16:21:25.850+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|176, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.856+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:25.856+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:25.856+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be77') } 2015-04-01T16:21:25.857+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|177, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:25.865+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.865+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.866+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be77') }
2015-04-01T16:21:25.866+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|178, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.868+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.869+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.869+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be78') }
2015-04-01T16:21:25.870+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|179, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.871+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.872+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.873+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7a') }
2015-04-01T16:21:25.873+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.874+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.875+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.875+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be79') }
2015-04-01T16:21:25.876+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|181, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.878+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.878+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.878+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be79') }
2015-04-01T16:21:25.879+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|182, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.886+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.887+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.888+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be79') }
2015-04-01T16:21:25.888+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|183, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.890+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.890+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.890+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7a') }
2015-04-01T16:21:25.891+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|184, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.895+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.895+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.896+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7b') }
2015-04-01T16:21:25.896+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|185, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.898+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.899+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.899+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7c') }
2015-04-01T16:21:25.899+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|186, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.901+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:25.901+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.902+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.902+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7b') }
2015-04-01T16:21:25.903+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|187, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.909+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.910+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.910+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7b') }
2015-04-01T16:21:25.910+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|188, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.912+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.913+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.913+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7c') }
2015-04-01T16:21:25.913+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.916+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.916+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.917+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7d') }
2015-04-01T16:21:25.917+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|190, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.920+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.920+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.920+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7e') }
2015-04-01T16:21:25.920+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.922+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.923+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.923+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7d') }
2015-04-01T16:21:25.923+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|192, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.929+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.929+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.930+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be80') }
2015-04-01T16:21:25.930+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|193, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.932+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.932+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.932+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7f') }
2015-04-01T16:21:25.933+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|194, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.938+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.939+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.939+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7f') }
2015-04-01T16:21:25.940+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7f') }
2015-04-01T16:21:25.940+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|196, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.944+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.944+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.945+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be82') }
2015-04-01T16:21:25.945+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|197, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.948+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.948+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.948+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be81') }
2015-04-01T16:21:25.948+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|198, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.951+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.951+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.952+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be81') }
2015-04-01T16:21:25.952+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.955+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.955+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.955+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be81') }
2015-04-01T16:21:25.955+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|200, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.961+0000 D REPL [rsBackgroundSync] bgsync buffer has 153 bytes
2015-04-01T16:21:25.961+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.962+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.962+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be84') }
2015-04-01T16:21:25.962+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|201, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.963+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.963+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.963+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be83') }
2015-04-01T16:21:25.963+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|202, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.967+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.967+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.967+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be83') }
2015-04-01T16:21:25.968+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be83') }
2015-04-01T16:21:25.968+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|204, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.976+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.977+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.977+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7d') }
2015-04-01T16:21:25.977+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|205, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.979+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.980+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.980+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7e') }
2015-04-01T16:21:25.980+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|206, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.983+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.983+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.983+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be7f') }
2015-04-01T16:21:25.983+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|207, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.987+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.987+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.987+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be80') }
2015-04-01T16:21:25.988+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|208, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.989+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.990+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.991+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be81') }
2015-04-01T16:21:25.991+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be82') }
2015-04-01T16:21:25.991+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.993+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.994+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:25.994+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be83') }
2015-04-01T16:21:25.994+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be84') }
2015-04-01T16:21:25.995+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:25.998+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:25.998+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:25.998+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be86') }
2015-04-01T16:21:25.999+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|213, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.001+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.002+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.002+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be85') }
2015-04-01T16:21:26.002+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905285000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.008+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.009+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.009+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be85') }
2015-04-01T16:21:26.009+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.012+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.012+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b05e15b5605d452be86') }
2015-04-01T16:21:26.012+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.014+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:26.014+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.015+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.015+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be88') }
2015-04-01T16:21:26.016+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.017+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.018+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.018+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be87') }
2015-04-01T16:21:26.019+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.038+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.039+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.039+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b04e15b5605d452be25') }
2015-04-01T16:21:26.040+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.069+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.069+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.070+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be89') }
2015-04-01T16:21:26.070+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.075+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.076+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:26.076+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8a') }
2015-04-01T16:21:26.076+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8b') }
2015-04-01T16:21:26.077+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8c') }
2015-04-01T16:21:26.077+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.079+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.079+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.079+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8d') }
2015-04-01T16:21:26.080+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.085+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.086+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.087+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be89') }
2015-04-01T16:21:26.087+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.088+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.089+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8a') }
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8b') }
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8c') }
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8d') }
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:26.091+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:26.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.092+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.094+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.094+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:26.094+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:26.094+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.100+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:26.100+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.101+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.102+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:26.102+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:26.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.108+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.108+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.109+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, 1FB6A1E0D284294C9652001955E82952) }
2015-04-01T16:21:26.109+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.115+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.115+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.115+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, 1FB6A1E0D284294C9652001955E82952) } 2015-04-01T16:21:26.115+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.122+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.123+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.123+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, 1FB6A1E0D284294C9652001955E82952) } 2015-04-01T16:21:26.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.125+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.126+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.126+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8e') } 2015-04-01T16:21:26.127+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8e') } 2015-04-01T16:21:26.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.132+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.133+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8e') } 2015-04-01T16:21:26.133+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.136+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.136+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, C1D295DDFF9C854AB2008D8C8DB19CFF) } 2015-04-01T16:21:26.136+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, C1D295DDFF9C854AB2008D8C8DB19CFF) } 2015-04-01T16:21:26.137+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.143+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.143+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, C1D295DDFF9C854AB2008D8C8DB19CFF) } 2015-04-01T16:21:26.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.146+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.147+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.147+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8f') } 2015-04-01T16:21:26.147+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8f') } 2015-04-01T16:21:26.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.152+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.153+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.153+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be8f') } 2015-04-01T16:21:26.153+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.155+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.156+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.156+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.156+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.162+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:26.163+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.164+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.164+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.164+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.166+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.167+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.167+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.167+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.172+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.172+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.173+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.173+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.175+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.176+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.176+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be90') } 2015-04-01T16:21:26.177+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be90') } 2015-04-01T16:21:26.178+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.182+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.182+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.182+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be90') } 2015-04-01T16:21:26.183+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.184+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.185+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.185+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.186+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.187+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.187+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.188+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.188+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.193+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.193+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.202+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.202+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.203+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, E90E4629FDD23C4E8CF475C4918F37BA) } 2015-04-01T16:21:26.203+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|43, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.204+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.205+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.206+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, E90E4629FDD23C4E8CF475C4918F37BA) } 2015-04-01T16:21:26.206+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.236+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.237+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.237+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, E90E4629FDD23C4E8CF475C4918F37BA) } 2015-04-01T16:21:26.238+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.239+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.240+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.241+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, D81137FCDF44914F9D962F271B8AF457) } 2015-04-01T16:21:26.241+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.244+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.245+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.246+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:26.246+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, D81137FCDF44914F9D962F271B8AF457) } 2015-04-01T16:21:26.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.247+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.248+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.248+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(3, D81137FCDF44914F9D962F271B8AF457) } 2015-04-01T16:21:26.249+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.253+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.253+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.253+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.253+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.254+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.261+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.261+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.261+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.262+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.265+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.265+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.265+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.266+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.267+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.268+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.269+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.269+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.275+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.276+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.276+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 123 } 2015-04-01T16:21:26.276+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|54, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.285+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.285+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.285+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be91') } 2015-04-01T16:21:26.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.287+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.288+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.289+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be91') } 2015-04-01T16:21:26.289+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.295+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.296+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.297+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be91') } 2015-04-01T16:21:26.297+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.299+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.299+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.299+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be92') } 2015-04-01T16:21:26.300+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be92') } 2015-04-01T16:21:26.300+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|59, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.305+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.305+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.305+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be92') } 2015-04-01T16:21:26.305+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.311+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.312+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.312+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.317+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.318+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.318+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.318+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.330+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:26.330+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.330+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.331+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:26.332+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.334+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.334+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.334+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be93') } 2015-04-01T16:21:26.334+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.341+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.341+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.342+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be93') } 2015-04-01T16:21:26.342+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.347+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.347+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.348+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be94') } 2015-04-01T16:21:26.348+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be95') } 2015-04-01T16:21:26.348+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.360+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.360+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.361+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be94') } 2015-04-01T16:21:26.361+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be95') } 2015-04-01T16:21:26.361+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|69, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.367+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.367+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.367+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be96') } 2015-04-01T16:21:26.367+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.370+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.371+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.371+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be96') } 2015-04-01T16:21:26.371+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|71, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.374+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.374+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.374+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be96') } 2015-04-01T16:21:26.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be96') } 2015-04-01T16:21:26.375+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|73, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.388+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.388+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.389+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be96') } 2015-04-01T16:21:26.389+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.392+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.393+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.393+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.393+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.393+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.394+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:26.395+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.395+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.396+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.396+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.396+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.397+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.397+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.398+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.398+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 } 2015-04-01T16:21:26.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 } 2015-04-01T16:21:26.399+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.401+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.401+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.401+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 } 2015-04-01T16:21:26.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 } 2015-04-01T16:21:26.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 11 } 2015-04-01T16:21:26.402+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.417+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.418+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.418+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.418+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.418+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.420+0000 D REPL [rsBackgroundSync] bgsync buffer has 515 bytes 2015-04-01T16:21:26.420+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.424+0000 D REPL [rsSync] replication batch size is 10 2015-04-01T16:21:26.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 11 } 2015-04-01T16:21:26.425+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.425+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.460+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.460+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.461+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.461+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|98, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.464+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.465+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.465+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.465+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.465+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.477+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.477+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.478+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.478+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.478+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.480+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.481+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:26.481+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.481+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.481+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.482+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.482+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.483+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|107, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.484+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:26.484+0000 D QUERY 
[rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.485+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.486+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.486+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.486+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.487+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.489+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.489+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.489+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 } 2015-04-01T16:21:26.489+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 } 2015-04-01T16:21:26.489+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.491+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.491+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.492+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 } 2015-04-01T16:21:26.492+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 } 2015-04-01T16:21:26.493+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.503+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.504+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.504+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.505+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.505+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.506+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.506+0000 D REPL [rsBackgroundSync] bgsync buffer has 515 bytes 2015-04-01T16:21:26.506+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.507+0000 D REPL [rsSync] replication batch size is 10 2015-04-01T16:21:26.507+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.507+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.507+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.507+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.508+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.508+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|127, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.512+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.512+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.513+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.513+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.513+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.514+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.519+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.520+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:26.520+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.521+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 625 bytes 2015-04-01T16:21:26.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.523+0000 D REPL [rsSync] replication batch size is 7 2015-04-01T16:21:26.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.525+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.525+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.525+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.526+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|138, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.534+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.534+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.535+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.535+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.535+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.537+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.538+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.538+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:26.538+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 } 2015-04-01T16:21:26.538+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 } 2015-04-01T16:21:26.539+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.540+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.541+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:26.541+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 } 2015-04-01T16:21:26.541+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 } 2015-04-01T16:21:26.541+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|145, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.543+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:26.544+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:26.544+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 } 2015-04-01T16:21:26.545+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 } 2015-04-01T16:21:26.545+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 } 2015-04-01T16:21:26.545+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|148, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:26.546+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.547+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.547+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 }
2015-04-01T16:21:26.547+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 }
2015-04-01T16:21:26.548+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|150, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.549+0000 D REPL [rsBackgroundSync] bgsync buffer has 220 bytes
2015-04-01T16:21:26.549+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.550+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:26.551+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 11 }
2015-04-01T16:21:26.551+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 12 }
2015-04-01T16:21:26.551+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 13 }
2015-04-01T16:21:26.552+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.552+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.555+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.556+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 14 }
2015-04-01T16:21:26.556+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 15 }
2015-04-01T16:21:26.556+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|155, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.567+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.567+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.567+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.567+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 }
2015-04-01T16:21:26.568+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|157, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.570+0000 D REPL [rsBackgroundSync] bgsync buffer has 1030 bytes
2015-04-01T16:21:26.571+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.573+0000 D REPL [rsSync] replication batch size is 15
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 3 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 4 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 5 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 6 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 7 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 8 }
2015-04-01T16:21:26.573+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 9 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 10 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 11 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 12 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 13 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 14 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 15 }
2015-04-01T16:21:26.574+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.575+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 }
2015-04-01T16:21:26.575+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|172, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.576+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.577+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.577+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.577+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 }
2015-04-01T16:21:26.577+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.580+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.580+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.580+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.580+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 }
2015-04-01T16:21:26.581+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|176, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.590+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.591+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.592+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.592+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 2 }
2015-04-01T16:21:26.592+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|178, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.598+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.598+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.598+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be98') }
2015-04-01T16:21:26.598+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|179, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.687+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.687+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.688+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be87') }
2015-04-01T16:21:26.688+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.691+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.691+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.691+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be88') }
2015-04-01T16:21:26.692+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|181, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.694+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.694+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.695+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9a') }
2015-04-01T16:21:26.696+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|182, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.697+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:26.698+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.698+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.698+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.698+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.699+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|184, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.701+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.701+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.702+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.702+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|185, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.704+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.705+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:26.706+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.706+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.706+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be99') }
2015-04-01T16:21:26.707+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|188, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.714+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.715+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.715+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be98') }
2015-04-01T16:21:26.715+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.723+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.723+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.724+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9b') }
2015-04-01T16:21:26.725+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|190, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.734+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.734+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.735+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9b') }
2015-04-01T16:21:26.735+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.737+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.737+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.738+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { AccountId: 1, Index: 2 } }
2015-04-01T16:21:26.738+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|192, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.744+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.744+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.744+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _t: "IdWithExtraField", AccountId: 3, Index: 4, Extra: 5 } }
2015-04-01T16:21:26.745+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|193, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.761+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.762+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.762+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { AccountId: 1, Index: 2 } }
2015-04-01T16:21:26.762+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _t: "IdWithExtraField", AccountId: 3, Index: 4, Extra: 5 } }
2015-04-01T16:21:26.762+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|195, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.765+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.766+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.766+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.766+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.767+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|197, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.771+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:26.771+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.771+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:26.772+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.772+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.774+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.775+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.775+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.776+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|200, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.780+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.781+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:26.781+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.781+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.781+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.782+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.789+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.789+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.790+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:26.790+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|204, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.795+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.796+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.796+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9c') }
2015-04-01T16:21:26.797+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|205, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.799+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.799+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.800+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9c') }
2015-04-01T16:21:26.800+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|206, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.815+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.815+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.815+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9c') }
2015-04-01T16:21:26.816+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|207, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.828+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.829+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.829+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9d') }
2015-04-01T16:21:26.830+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|208, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.837+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.837+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.838+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9d') }
2015-04-01T16:21:26.838+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|209, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.889+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.889+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.889+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9e') }
2015-04-01T16:21:26.890+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.944+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.944+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.945+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9e') }
2015-04-01T16:21:26.945+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.947+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.947+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.948+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9f') }
2015-04-01T16:21:26.948+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.953+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:26.954+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.954+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.954+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452be9f') }
2015-04-01T16:21:26.955+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|213, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.957+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.957+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.957+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea0') }
2015-04-01T16:21:26.958+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.962+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.963+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.963+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea0') }
2015-04-01T16:21:26.963+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|215, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.965+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.966+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.966+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea1') }
2015-04-01T16:21:26.966+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|216, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.969+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.969+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.970+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea1') }
2015-04-01T16:21:26.970+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|217, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.976+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.976+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.977+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea3') }
2015-04-01T16:21:26.977+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.983+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.983+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.983+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea3') }
2015-04-01T16:21:26.984+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|219, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.986+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.986+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.986+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea4') }
2015-04-01T16:21:26.987+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|220, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.991+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.991+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.991+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea4') }
2015-04-01T16:21:26.991+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|221, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.994+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.994+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.994+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea5') }
2015-04-01T16:21:26.995+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:26.997+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:26.997+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:26.997+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b06e15b5605d452bea5') }
2015-04-01T16:21:26.998+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905286000|223, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.014+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:27.014+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:27.014+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019",
fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:27.032+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.033+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.033+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:27.033+0000 I COMMAND [repl writer worker 14] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:27.033+0000 D STORAGE [repl writer worker 14] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:27.033+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.033+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.034+0000 D STORAGE [repl writer worker 14] dropIndexes done
2015-04-01T16:21:27.034+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.040+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.040+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.041+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] create collection Tests04011621.testcollection {}
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] allocating new extent
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:27.041+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.042+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.044+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.045+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.045+0000 D STORAGE [repl writer worker 14] allocating new extent
2015-04-01T16:21:27.045+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:61f000
2015-04-01T16:21:27.046+0000 I INDEX [repl writer worker 14] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.046+0000 I INDEX [repl writer worker 14] building index using bulk method
2015-04-01T16:21:27.046+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.046+0000 D INDEX [repl writer worker 14] bulk commit starting for index: x_1
2015-04-01T16:21:27.046+0000 D INDEX [repl writer worker 14] done building bottom layer, going to commit
2015-04-01T16:21:27.046+0000 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:27.046+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.046+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.046+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.047+0000 D REPL [rsBackgroundSync] bgsync buffer has 105 bytes
2015-04-01T16:21:27.048+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.048+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.048+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:27.048+0000 I COMMAND [repl writer worker 14] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:27.048+0000 D STORAGE [repl writer worker 14] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:27.048+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.049+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.049+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.049+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache
reset
2015-04-01T16:21:27.049+0000 D STORAGE [repl writer worker 14] dropIndexes done
2015-04-01T16:21:27.049+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.050+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.051+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.052+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] create collection Tests04011621.testcollection {}
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] allocating new extent
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:61f000
2015-04-01T16:21:27.052+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.055+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.059+0000 D STORAGE [repl writer worker 14] allocating new extent
2015-04-01T16:21:27.059+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:27.059+0000 I INDEX [repl writer worker 14] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.059+0000 I INDEX [repl writer worker 14] building index using bulk method
2015-04-01T16:21:27.059+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.059+0000 D INDEX [repl writer worker 14] bulk commit starting for index: x_1
2015-04-01T16:21:27.059+0000 D INDEX [repl writer worker 14] done building bottom layer, going to commit
2015-04-01T16:21:27.059+0000 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:27.059+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.059+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.059+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.066+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:27.066+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:27.066+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:27.088+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.089+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.089+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) }
2015-04-01T16:21:27.089+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.091+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.092+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.092+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) }
2015-04-01T16:21:27.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.095+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.095+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.095+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) }
2015-04-01T16:21:27.095+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) }
2015-04-01T16:21:27.095+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.097+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.098+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.099+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false }
2015-04-01T16:21:27.100+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true }
2015-04-01T16:21:27.100+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.104+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.106+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:27.106+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false }
2015-04-01T16:21:27.107+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true }
2015-04-01T16:21:27.107+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) }
2015-04-01T16:21:27.107+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287102) }
2015-04-01T16:21:27.107+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) }
2015-04-01T16:21:27.107+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.110+0000 D REPL [rsBackgroundSync] bgsync buffer has 214 bytes
2015-04-01T16:21:27.110+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.111+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:27.111+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) }
2015-04-01T16:21:27.111+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287102) }
2015-04-01T16:21:27.112+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) }
2015-04-01T16:21:27.112+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} }
2015-04-01T16:21:27.112+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.114+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.114+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.115+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } }
2015-04-01T16:21:27.115+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.117+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.117+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.117+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} }
2015-04-01T16:21:27.117+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } }
2015-04-01T16:21:27.118+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.120+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.120+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.121+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 }
2015-04-01T16:21:27.121+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 }
2015-04-01T16:21:27.122+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.123+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.124+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 }
2015-04-01T16:21:27.124+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 }
2015-04-01T16:21:27.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.127+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.127+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.127+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.127+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.130+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.130+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.130+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.130+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.130+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.133+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.133+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.134+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.134+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.134+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.138+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:27.138+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.138+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.138+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.139+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.139+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.142+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.142+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.142+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.142+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.142+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|37, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.144+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.144+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.144+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.144+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.145+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.148+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.148+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.148+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.148+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached.
query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.149+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.151+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.151+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.151+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.151+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.152+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.155+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.155+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.155+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.155+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.155+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.160+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.160+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.160+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.160+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.161+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.161+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.161+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea7') }
2015-04-01T16:21:27.161+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:27.162+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.163+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.164+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.164+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea8') }
2015-04-01T16:21:27.165+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.167+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.167+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.167+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.167+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.168+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea7') }
2015-04-01T16:21:27.168+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea8') }
2015-04-01T16:21:27.168+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.170+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes
2015-04-01T16:21:27.171+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.172+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.172+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.172+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.173+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.176+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.176+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.177+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.177+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.177+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.177+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. 
query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.178+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.181+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:27.181+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:27.181+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:27.181+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:29.181Z 2015-04-01T16:21:27.183+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.184+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.184+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.184+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. 
query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.184+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea9') } 2015-04-01T16:21:27.184+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.186+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.186+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.186+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bea9') } 2015-04-01T16:21:27.187+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.189+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.190+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.190+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.190+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.190+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.195+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.196+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.196+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.196+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. 
query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.196+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|58, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.201+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.202+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.202+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false } 2015-04-01T16:21:27.202+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true } 2015-04-01T16:21:27.202+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.207+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.207+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.207+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false } 2015-04-01T16:21:27.207+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true } 2015-04-01T16:21:27.208+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.210+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.211+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.211+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.211+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.216+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.216+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.216+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.220+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:27.220+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.220+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.220+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.221+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false } 2015-04-01T16:21:27.222+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true } 2015-04-01T16:21:27.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.228+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:27.229+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.229+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false } 2015-04-01T16:21:27.229+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true } 2015-04-01T16:21:27.229+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.229+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|71, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.231+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.231+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.231+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) } 2015-04-01T16:21:27.231+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.234+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.236+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.236+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287230) } 2015-04-01T16:21:27.236+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) } 2015-04-01T16:21:27.237+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.240+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.240+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:27.241+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.241+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) } 2015-04-01T16:21:27.241+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287230) } 2015-04-01T16:21:27.241+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) } 2015-04-01T16:21:27.241+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.243+0000 D REPL [rsBackgroundSync] bgsync buffer has 121 
bytes 2015-04-01T16:21:27.243+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.244+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.244+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.244+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} } 2015-04-01T16:21:27.244+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.246+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.248+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.248+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } } 2015-04-01T16:21:27.248+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|81, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.250+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.251+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.251+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.251+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} } 2015-04-01T16:21:27.251+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } } 2015-04-01T16:21:27.251+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|84, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.253+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.253+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.254+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.254+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.256+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.256+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.257+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 } 2015-04-01T16:21:27.257+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 } 2015-04-01T16:21:27.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.262+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.263+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:27.263+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.263+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 } 2015-04-01T16:21:27.263+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 } 2015-04-01T16:21:27.263+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.264+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.265+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.266+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.266+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.266+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.267+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.271+0000 D REPL [rsBackgroundSync] bgsync buffer has 118 bytes 2015-04-01T16:21:27.271+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.272+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.272+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.272+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.272+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|96, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.275+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.275+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.275+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.276+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.278+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.278+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.278+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.279+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.279+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|99, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.281+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.281+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.282+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.282+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.282+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.282+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.285+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.286+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.286+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.286+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|103, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.288+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.288+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.289+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.289+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.289+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|104, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.291+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.292+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.292+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.292+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.292+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.292+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.295+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.296+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.296+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.297+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.297+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.297+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.301+0000 D REPL [rsBackgroundSync] bgsync buffer has 118 bytes
2015-04-01T16:21:27.301+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.301+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.301+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.302+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.302+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.302+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|110, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.305+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.305+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.305+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.305+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.306+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.306+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.310+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.310+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.310+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.311+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.311+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.311+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.313+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.314+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.314+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beaa') }
2015-04-01T16:21:27.314+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|115, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.317+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.317+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.317+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beab') }
2015-04-01T16:21:27.318+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beac') }
2015-04-01T16:21:27.319+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.321+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.321+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.321+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beaa') }
2015-04-01T16:21:27.321+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beab') }
2015-04-01T16:21:27.321+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beac') }
2015-04-01T16:21:27.322+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.325+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.326+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.326+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.326+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" }
2015-04-01T16:21:27.326+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|122, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.328+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.329+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.329+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" }
2015-04-01T16:21:27.330+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.331+0000 D REPL [rsBackgroundSync] bgsync buffer has 118 bytes
2015-04-01T16:21:27.332+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.332+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.332+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.332+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" }
2015-04-01T16:21:27.333+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" }
2015-04-01T16:21:27.334+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.337+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.338+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.338+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.339+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.339+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.339+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|128, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.349+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.350+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.350+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.351+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|129, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.353+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.353+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.353+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.353+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.353+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.355+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.356+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.356+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.357+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.358+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.359+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.359+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.359+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.361+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.362+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.362+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.363+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) }
2015-04-01T16:21:27.363+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) }
2015-04-01T16:21:27.363+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|135, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.364+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.365+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:27.365+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.365+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) }
2015-04-01T16:21:27.365+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) }
2015-04-01T16:21:27.366+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.366+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|139, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.368+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:27.368+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.370+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.370+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false }
2015-04-01T16:21:27.370+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true }
2015-04-01T16:21:27.371+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.372+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.372+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.374+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:27.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: false }
2015-04-01T16:21:27.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: true }
2015-04-01T16:21:27.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.375+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} }
2015-04-01T16:21:27.376+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|146, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.376+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.378+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:27.379+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } }
2015-04-01T16:21:27.379+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.379+0000 D QUERY [repl writer worker 14] Using idhack: { _id: {} }
2015-04-01T16:21:27.379+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { A: 1, B: 2 } }
2015-04-01T16:21:27.379+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.380+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.380+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.380+0000 D REPL [rsBackgroundSync] bgsync buffer has 110 bytes
2015-04-01T16:21:27.381+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.383+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) }
2015-04-01T16:21:27.383+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287376) }
2015-04-01T16:21:27.383+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.383+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.384+0000 D REPL [rsSync] replication batch size is 8
2015-04-01T16:21:27.384+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) }
2015-04-01T16:21:27.384+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287376) }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 }
2015-04-01T16:21:27.385+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 }
2015-04-01T16:21:27.388+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|161, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.388+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.390+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:27.390+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.390+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 }
2015-04-01T16:21:27.390+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 }
2015-04-01T16:21:27.390+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.392+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.392+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:27.393+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|165, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.393+0000 D REPL [rsBackgroundSync] bgsync buffer has 221 bytes
2015-04-01T16:21:27.394+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:21:27.395+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:27.395+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.395+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:29.395Z
2015-04-01T16:21:27.396+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.396+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.397+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|167, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.397+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.399+0000 D REPL [rsSync] replication batch size is 6
2015-04-01T16:21:27.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.399+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.400+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.400+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.400+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|173, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.400+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.401+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:27.401+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 }
2015-04-01T16:21:27.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 }
2015-04-01T16:21:27.402+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.402+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.402+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|178, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.403+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.403+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.404+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.404+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.404+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MaxKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.405+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.405+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.405+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.405+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.406+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|181, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.407+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.408+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.409+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.409+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.409+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|182, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 217 bytes
2015-04-01T16:21:27.411+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.412+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:27.412+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.412+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.412+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: MinKey } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.412+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.412+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.413+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.413+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|186, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.413+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.415+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.415+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.415+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.415+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 }
2015-04-01T16:21:27.415+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.416+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.416+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.417+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.418+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('000000000000000000000000') }
2015-04-01T16:21:27.418+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bead') }
2015-04-01T16:21:27.418+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.420+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.420+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:27.420+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } }
2015-04-01T16:21:27.421+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('000000000000000000000000') }
2015-04-01T16:21:27.421+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bead') }
2015-04-01T16:21:27.422+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|194, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.423+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.424+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.424+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.425+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|197, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.426+0000 D REPL [rsBackgroundSync] bgsync buffer has 222 bytes 2015-04-01T16:21:27.427+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.428+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:27.428+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.428+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.428+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.428+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.429+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|201, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.430+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.431+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.431+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.431+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. 
query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.431+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|202, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.433+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.433+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.433+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.434+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.434+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: Timestamp 1000|2 } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.434+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|204, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.439+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.439+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.440+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) } 2015-04-01T16:21:27.440+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287437) } 2015-04-01T16:21:27.440+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) } 2015-04-01T16:21:27.440+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|207, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.443+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.443+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.443+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(-62135596800000) } 2015-04-01T16:21:27.443+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(1427905287437) } 2015-04-01T16:21:27.444+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|209, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.445+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.445+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.445+0000 D QUERY [repl writer worker 14] Using idhack: { _id: new Date(253402300799999) } 2015-04-01T16:21:27.446+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.448+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.448+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.448+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 } 2015-04-01T16:21:27.449+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.452+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.452+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.453+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 } 2015-04-01T16:21:27.453+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.455+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.455+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.455+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0.0 } 2015-04-01T16:21:27.456+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1.0 } 2015-04-01T16:21:27.456+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.457+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:27.457+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.458+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.459+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.459+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.459+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|216, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.463+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.463+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.463+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.463+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.464+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.466+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.466+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.467+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.467+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.468+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|220, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.472+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.472+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.472+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 0 } 2015-04-01T16:21:27.472+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.474+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.476+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.476+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.477+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beae') } 2015-04-01T16:21:27.477+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beaf') } 2015-04-01T16:21:27.477+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|224, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.481+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.481+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beae') } 2015-04-01T16:21:27.481+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beaf') } 2015-04-01T16:21:27.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|226, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.484+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.485+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.486+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.486+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.486+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.487+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|228, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.493+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:27.495+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.496+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.496+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|229, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.497+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.498+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.499+0000 D QUERY [repl writer worker 14] Relevant index 0 is kp: { _id: 1 } io: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.499+0000 D QUERY [repl writer worker 14] Only one plan is available; it will be run but will not be cached. 
query: { _id: null } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { _id: 1 } 2015-04-01T16:21:27.500+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "" } 2015-04-01T16:21:27.500+0000 D QUERY [repl writer worker 14] Using idhack: { _id: "123" } 2015-04-01T16:21:27.501+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|232, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.501+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.502+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.503+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.504+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) } 2015-04-01T16:21:27.504+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|234, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.504+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.505+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.505+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) } 2015-04-01T16:21:27.505+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|235, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.519+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.519+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.519+0000 D QUERY [repl writer worker 14] Using idhack: { _id: { _csharpnull: true } } 2015-04-01T16:21:27.520+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|236, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.522+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.523+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, ) } 2015-04-01T16:21:27.524+0000 D QUERY [repl writer worker 14] Using idhack: { _id: BinData(0, 010203) } 2015-04-01T16:21:27.524+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|238, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.529+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.529+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.530+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb0') } 2015-04-01T16:21:27.530+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|239, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.540+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.540+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.541+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.541+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:63f000 2015-04-01T16:21:27.541+0000 I INDEX [repl writer worker 14] build index on: Tests04011621.testcollection properties: { v: 1, key: { a.b: 1 }, name: "a.b_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.541+0000 I INDEX [repl writer worker 14] building index using bulk method 2015-04-01T16:21:27.542+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.542+0000 D INDEX [repl writer worker 14] bulk commit starting for index: a.b_1 2015-04-01T16:21:27.542+0000 D INDEX [repl writer worker 14] done building bottom layer, going to commit 2015-04-01T16:21:27.542+0000 I INDEX [repl writer worker 14] build index done. scanned 1 total records. 0 secs 2015-04-01T16:21:27.542+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.542+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.542+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|240, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.548+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.548+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.549+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb2') } 2015-04-01T16:21:27.550+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|241, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.561+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.562+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.562+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb7') } 2015-04-01T16:21:27.563+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|242, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.570+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.571+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.571+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb0') } 2015-04-01T16:21:27.571+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|243, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.573+0000 D REPL [rsBackgroundSync] bgsync buffer has 111 bytes 2015-04-01T16:21:27.573+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.575+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:27.575+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb2') } 2015-04-01T16:21:27.575+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb7') } 2015-04-01T16:21:27.575+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb8') } 2015-04-01T16:21:27.576+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|246, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.584+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.584+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.585+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb8') } 2015-04-01T16:21:27.585+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|247, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.586+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.587+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.587+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb9') } 2015-04-01T16:21:27.588+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|248, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.592+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.593+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.593+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beb9') } 2015-04-01T16:21:27.593+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beba') } 2015-04-01T16:21:27.593+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|250, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.598+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.599+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.599+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452beba') } 2015-04-01T16:21:27.599+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebb') } 2015-04-01T16:21:27.599+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|252, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.606+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.607+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.607+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:27.607+0000 I COMMAND [repl writer worker 14] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:27.607+0000 D STORAGE [repl writer worker 14] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:27.607+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.607+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.607+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.607+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.608+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { a.b: 1 }, name: "a.b_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.608+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.608+0000 D STORAGE [repl writer worker 14] dropIndexes done 2015-04-01T16:21:27.608+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|253, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.620+0000 D QUERY [rsSync] Only one plan is available; it will be run but 
will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.621+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.622+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:27.622+0000 D STORAGE [repl writer worker 14] create collection Tests04011621.testcollection {} 2015-04-01T16:21:27.622+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:27.622+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.622+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.623+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000 2015-04-01T16:21:27.623+0000 D STORAGE [repl writer worker 14] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.623+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|254, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.623+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.624+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.624+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.624+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|255, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.632+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.633+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.633+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.633+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|256, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.635+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.636+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.636+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.636+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|257, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.653+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.653+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.654+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.654+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|258, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.656+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.657+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.657+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.657+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|259, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.672+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:27.672+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.673+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.673+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.673+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|260, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.689+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.689+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.690+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.691+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|261, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.695+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.696+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.696+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.696+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.696+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|263, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.712+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.713+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.713+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.713+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|264, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.715+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.716+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.716+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.716+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|265, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.721+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.721+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.721+0000 D QUERY [repl writer worker 14] Using idhack: { _id: 1 } 2015-04-01T16:21:27.722+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|266, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.737+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.737+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.737+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { drop: "fs.files" } 2015-04-01T16:21:27.737+0000 I COMMAND [repl writer worker 14] CMD: drop Tests04011621.fs.files 2015-04-01T16:21:27.737+0000 D STORAGE [repl writer worker 14] dropCollection: Tests04011621.fs.files 2015-04-01T16:21:27.738+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.fs.files" } 2015-04-01T16:21:27.738+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.738+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { filename: 1, uploadDate: 1 }, name: "filename_1_uploadDate_1", ns: "Tests04011621.fs.files" } 2015-04-01T16:21:27.738+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.738+0000 D STORAGE [repl writer worker 14] dropIndexes done 2015-04-01T16:21:27.738+0000 D REPL 
[SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|267, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.741+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.742+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.742+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { drop: "fs.chunks" } 2015-04-01T16:21:27.742+0000 I COMMAND [repl writer worker 14] CMD: drop Tests04011621.fs.chunks 2015-04-01T16:21:27.742+0000 D STORAGE [repl writer worker 14] dropCollection: Tests04011621.fs.chunks 2015-04-01T16:21:27.742+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.fs.chunks" } 2015-04-01T16:21:27.742+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.742+0000 D INDEX [repl writer worker 14] dropAllIndexes dropping: { v: 1, unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "Tests04011621.fs.chunks" } 2015-04-01T16:21:27.742+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.742+0000 D STORAGE [repl writer worker 14] dropIndexes done 2015-04-01T16:21:27.743+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|268, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.747+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.747+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.748+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { create: "fs.files" } 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] create collection Tests04011621.fs.files {} 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:1dd000 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:1ff000 2015-04-01T16:21:27.748+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.748+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.749+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.749+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|269, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.749+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.749+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:1df000 2015-04-01T16:21:27.749+0000 I INDEX [repl writer worker 14] build index on: Tests04011621.fs.files properties: { v: 1, key: { filename: 1, uploadDate: 1 }, name: "filename_1_uploadDate_1", ns: "Tests04011621.fs.files" } 2015-04-01T16:21:27.749+0000 I INDEX [repl writer worker 14] building index using bulk method 2015-04-01T16:21:27.749+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.749+0000 D INDEX [repl writer worker 14] bulk commit starting for index: filename_1_uploadDate_1 2015-04-01T16:21:27.750+0000 D INDEX [repl writer worker 14] done building bottom layer, going to commit 2015-04-01T16:21:27.750+0000 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:27.750+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.750+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.files: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.750+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|270, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.751+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.751+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.751+0000 D COMMAND [repl writer worker 14] run command Tests04011621.$cmd { create: "fs.chunks" } 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] create collection Tests04011621.fs.chunks {} 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:1b3000 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:1bd000 2015-04-01T16:21:27.752+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.752+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, 
optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|271, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.753+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.753+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.754+0000 D STORAGE [repl writer worker 14] allocating new extent 2015-04-01T16:21:27.754+0000 D STORAGE [repl writer worker 14] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:10f000 2015-04-01T16:21:27.754+0000 I INDEX [repl writer worker 14] build index on: Tests04011621.fs.chunks properties: { v: 1, unique: true, key: { files_id: 1, n: 1 }, name: "files_id_1_n_1", ns: "Tests04011621.fs.chunks" } 2015-04-01T16:21:27.754+0000 I INDEX [repl writer worker 14] building index using bulk method 2015-04-01T16:21:27.755+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.755+0000 D INDEX [repl writer worker 14] bulk commit starting for index: files_id_1_n_1 2015-04-01T16:21:27.755+0000 D INDEX [repl writer worker 14] done building bottom layer, going to commit 2015-04-01T16:21:27.755+0000 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:27.755+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.755+0000 D STORAGE [repl writer worker 14] Tests04011621.fs.chunks: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.755+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|272, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.755+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.756+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:27.756+0000 D QUERY [repl writer worker 14] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebd') } 2015-04-01T16:21:27.756+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebc') } 2015-04-01T16:21:27.757+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|274, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.829+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:27.829+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.830+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.830+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:27.830+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:27.830+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:27.830+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:27.830+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.831+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:27.831+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|275, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.835+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.836+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.836+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000 2015-04-01T16:21:27.836+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:27.837+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:27.837+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|276, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.837+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:27.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebe') } 2015-04-01T16:21:27.838+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|277, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:27.838+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.839+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebe') }
2015-04-01T16:21:27.839+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|278, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.844+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.844+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebe') }
2015-04-01T16:21:27.845+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|279, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.847+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.847+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:27.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebf') }
2015-04-01T16:21:27.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bebf') }
2015-04-01T16:21:27.848+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|281, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.854+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.854+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.854+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:27.854+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:27.854+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:27.854+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.854+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.854+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:27.855+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|282, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.856+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.856+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.858+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000
2015-04-01T16:21:27.858+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.858+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|283, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.859+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.860+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b07e15b5605d452bec0') }
2015-04-01T16:21:27.860+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|284, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:27.949+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:27.949+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:27.951+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:27.951+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:27.951+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:27.951+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:27.951+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:27.951+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:27.951+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905287000|285, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.012+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.013+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000
2015-04-01T16:21:28.013+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.013+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.014+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.015+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.015+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:28.016+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.024+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.024+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.024+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:28.024+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:28.024+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:28.024+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.024+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.025+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:28.025+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.027+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes
2015-04-01T16:21:28.027+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.028+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.028+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000
2015-04-01T16:21:28.028+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.029+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.029+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.031+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.031+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.031+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:28.031+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { A: 1, _id: 1 }, name: "A_1__id_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.031+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:28.031+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.032+0000 D INDEX [repl writer worker 15] bulk commit starting for index: A_1__id_1
2015-04-01T16:21:28.032+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:28.032+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:28.032+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.032+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.033+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.033+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.034+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:28.035+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:28.035+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:28.035+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:28.035+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.074+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.075+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.076+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:28.076+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:28.076+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:28.076+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.076+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.076+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { A: 1, _id: 1 }, name: "A_1__id_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.076+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.076+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:28.077+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.080+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.080+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.081+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:28.081+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.082+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.084+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.085+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.085+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec1') }
2015-04-01T16:21:28.086+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.087+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.087+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.088+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec1') }
2015-04-01T16:21:28.088+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.096+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.096+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.096+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "test" }
2015-04-01T16:21:28.096+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.test {}
2015-04-01T16:21:28.096+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:65f000
2015-04-01T16:21:28.097+0000 D STORAGE [repl writer worker 15] Tests04011621.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.097+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.097+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:63f000
2015-04-01T16:21:28.097+0000 D STORAGE [repl writer worker 15] Tests04011621.test: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.097+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.099+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.099+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.100+0000 D QUERY [repl writer worker 15] Using idhack: { _id: BinData(4, 00112233445566778899AABBCCDDEEFF) }
2015-04-01T16:21:28.100+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.102+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.103+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: BinData(4, 00112233445566778899AABBCCDDEEFF) }
2015-04-01T16:21:28.103+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.114+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.115+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.115+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:28.115+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:28.115+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:28.115+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.115+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.115+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:28.116+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.120+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.120+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.121+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:28.121+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.121+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.121+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.122+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec2') }
2015-04-01T16:21:28.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.176+0000 D REPL [rsBackgroundSync] bgsync buffer has 101 bytes
2015-04-01T16:21:28.177+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.177+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.177+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "castTest" }
2015-04-01T16:21:28.178+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.castTest {}
2015-04-01T16:21:28.178+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:661000
2015-04-01T16:21:28.178+0000 D STORAGE [repl writer worker 15] Tests04011621.castTest: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.178+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.179+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:61f000
2015-04-01T16:21:28.179+0000 D STORAGE [repl writer worker 15] Tests04011621.castTest: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.180+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.180+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:28.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: BinData(3, 0FFD2FEF4EE5D7418B082000917A8969) }
2015-04-01T16:21:28.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: BinData(3, 996B9A4D148459439A8AEDBC5EFCD47E) }
2015-04-01T16:21:28.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.233+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.233+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.235+0000 I INDEX [repl writer worker 15] allocating new ns file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\csharp475.ns, filling with zeroes...
2015-04-01T16:21:28.248+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\csharp475.0, filling with zeroes...
2015-04-01T16:21:28.251+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\csharp475.0, size: 16MB, took 0.002 secs
2015-04-01T16:21:28.255+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.255+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:4096 fromFreeList: 0 eloc: 0:4000
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] csharp475.system.indexes: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] csharp475.system.namespaces: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.256+0000 D COMMAND [repl writer worker 15] run command csharp475.$cmd { create: "ProjectTest" }
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] create collection csharp475.ProjectTest {}
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:5000
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] csharp475.ProjectTest: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:7000
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.256+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:9000
2015-04-01T16:21:28.257+0000 D STORAGE [repl writer worker 15] csharp475.ProjectTest: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.257+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.258+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.259+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec3') }
2015-04-01T16:21:28.260+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.374+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.374+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.375+0000 D COMMAND [repl writer worker 15] run command csharp475.$cmd { dropDatabase: 1 }
2015-04-01T16:21:28.375+0000 I COMMAND [repl writer worker 15] dropDatabase csharp475 starting
2015-04-01T16:21:28.375+0000 D STORAGE [repl writer worker 15] dropDatabase csharp475
2015-04-01T16:21:28.494+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:28.495+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:28.498+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:28.503+0000 I JOURNAL [repl writer worker 15] journalCleanup...
2015-04-01T16:21:28.503+0000 I JOURNAL [repl writer worker 15] removeJournalFiles
2015-04-01T16:21:28.504+0000 D JOURNAL [repl writer worker 15] removeJournalFiles end
2015-04-01T16:21:28.505+0000 D STORAGE [repl writer worker 15] remove file D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\csharp475.ns
2015-04-01T16:21:28.506+0000 I COMMAND [repl writer worker 15] dropDatabase csharp475 finished
2015-04-01T16:21:28.507+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.508+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.509+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.509+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:28.509+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:28.509+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:28.509+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:28.509+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.510+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:28.510+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:28.510+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:28.511+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:28.511+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:28.511+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:28.511+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:28.511+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.512+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:28.512+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:28.512+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:28.512+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.512+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.514+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.514+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.514+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:28.514+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:28.515+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.515+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.516+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.516+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:28.516+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:28.516+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:28.516+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: 
"Tests04011621.testcollection" } 2015-04-01T16:21:28.516+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.516+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:28.516+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.517+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.518+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.518+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:28.518+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:28.518+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:28.519+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.519+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:28.519+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:28.519+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.520+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.520+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.522+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:28.523+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.594+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.594+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.595+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:28.595+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:28.596+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:28.596+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:28.596+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.596+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:28.596+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.600+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.600+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.601+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.601+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:28.601+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:28.601+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:28.601+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.601+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:28.601+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:28.602+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.602+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.602+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.602+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffa') } 2015-04-01T16:21:28.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffb') } 2015-04-01T16:21:28.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffc') } 2015-04-01T16:21:28.604+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.604+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.604+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffd') } 2015-04-01T16:21:28.605+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.607+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.609+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffe') } 2015-04-01T16:21:28.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cfff') } 2015-04-01T16:21:28.611+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.612+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.613+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d000') } 2015-04-01T16:21:28.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d001') } 2015-04-01T16:21:28.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d002') } 2015-04-01T16:21:28.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d003') } 2015-04-01T16:21:28.614+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 
2015-04-01T16:21:28.615+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.616+0000 D JOURNAL [journal writer] lsn set 83867 2015-04-01T16:21:28.616+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d004') } 2015-04-01T16:21:28.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d005') } 2015-04-01T16:21:28.617+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.618+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.619+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d006') } 2015-04-01T16:21:28.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d007') } 2015-04-01T16:21:28.620+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.621+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.621+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.622+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d008') } 2015-04-01T16:21:28.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d009') } 2015-04-01T16:21:28.624+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.624+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.625+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.626+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00a') } 2015-04-01T16:21:28.626+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00b') } 2015-04-01T16:21:28.626+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00c') } 2015-04-01T16:21:28.626+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|54, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.629+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.629+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.630+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00d') } 2015-04-01T16:21:28.630+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00e') } 2015-04-01T16:21:28.630+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00f') } 2015-04-01T16:21:28.630+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.631+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.632+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d010') } 2015-04-01T16:21:28.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d011') } 2015-04-01T16:21:28.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d012') } 2015-04-01T16:21:28.634+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.635+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.636+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.636+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d013') } 2015-04-01T16:21:28.636+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d014') } 2015-04-01T16:21:28.636+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d015') } 2015-04-01T16:21:28.636+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.638+0000 D REPL [rsBackgroundSync] bgsync buffer has 125 bytes 2015-04-01T16:21:28.638+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.639+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.640+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d016') } 2015-04-01T16:21:28.640+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d017') } 2015-04-01T16:21:28.640+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d018') } 2015-04-01T16:21:28.641+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.642+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.643+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.643+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d019') } 2015-04-01T16:21:28.643+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01a') } 2015-04-01T16:21:28.644+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01b') } 2015-04-01T16:21:28.644+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|69, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.646+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.647+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01c') } 2015-04-01T16:21:28.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01d') } 2015-04-01T16:21:28.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01e') } 2015-04-01T16:21:28.647+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.649+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.650+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.650+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01f') } 2015-04-01T16:21:28.650+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d020') } 2015-04-01T16:21:28.651+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d021') } 2015-04-01T16:21:28.651+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.653+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.654+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d022') } 2015-04-01T16:21:28.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d023') } 2015-04-01T16:21:28.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d024') } 2015-04-01T16:21:28.655+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.656+0000 D REPL [rsBackgroundSync] bgsync buffer has 125 bytes 2015-04-01T16:21:28.656+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.657+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d025') } 2015-04-01T16:21:28.657+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.657+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|79, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.658+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d026') } 2015-04-01T16:21:28.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d027') } 2015-04-01T16:21:28.659+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|81, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.660+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.661+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d028') } 2015-04-01T16:21:28.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d029') } 2015-04-01T16:21:28.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02a') } 2015-04-01T16:21:28.661+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|84, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.662+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.664+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02b') } 2015-04-01T16:21:28.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02c') } 2015-04-01T16:21:28.664+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.665+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.666+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02d') } 2015-04-01T16:21:28.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02e') } 2015-04-01T16:21:28.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02f') } 2015-04-01T16:21:28.667+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|89, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.669+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.669+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d030') } 2015-04-01T16:21:28.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d031') } 2015-04-01T16:21:28.670+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.671+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.671+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d032') } 2015-04-01T16:21:28.672+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|92, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.674+0000 D REPL [rsBackgroundSync] bgsync buffer has 250 bytes 2015-04-01T16:21:28.674+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.675+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d033') } 2015-04-01T16:21:28.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d034') } 2015-04-01T16:21:28.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d035') } 2015-04-01T16:21:28.676+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|95, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.677+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.678+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d036') } 2015-04-01T16:21:28.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d037') } 2015-04-01T16:21:28.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d038') } 2015-04-01T16:21:28.679+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|98, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.681+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.681+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d039') } 2015-04-01T16:21:28.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03a') } 2015-04-01T16:21:28.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03b') } 2015-04-01T16:21:28.682+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|101, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.683+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.684+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03c') } 2015-04-01T16:21:28.684+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.687+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.687+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03d') } 2015-04-01T16:21:28.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03e') } 2015-04-01T16:21:28.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03f') } 2015-04-01T16:21:28.688+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|105, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.690+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.691+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.691+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d040') } 2015-04-01T16:21:28.692+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d041') } 2015-04-01T16:21:28.692+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d042') } 2015-04-01T16:21:28.692+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.693+0000 D REPL [rsBackgroundSync] bgsync buffer has 125 bytes 2015-04-01T16:21:28.694+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.694+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.694+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d043') } 2015-04-01T16:21:28.695+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d044') } 2015-04-01T16:21:28.695+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d045') } 2015-04-01T16:21:28.695+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|111, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.696+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.697+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d046') } 2015-04-01T16:21:28.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d047') } 2015-04-01T16:21:28.698+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|113, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.699+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.700+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.700+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d048') } 2015-04-01T16:21:28.700+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d049') } 2015-04-01T16:21:28.700+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04a') } 2015-04-01T16:21:28.701+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|116, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.702+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.703+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.703+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04b') } 2015-04-01T16:21:28.703+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04c') } 2015-04-01T16:21:28.703+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|118, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.705+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.706+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.706+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04d') } 2015-04-01T16:21:28.707+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04e') } 2015-04-01T16:21:28.707+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04f') } 2015-04-01T16:21:28.707+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.708+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.709+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.709+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d050') } 2015-04-01T16:21:28.709+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d051') } 2015-04-01T16:21:28.710+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.711+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.711+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.711+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d052') } 2015-04-01T16:21:28.711+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|124, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.714+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.714+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.714+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.715+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d053') } 2015-04-01T16:21:28.716+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|125, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.719+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.719+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:28.719+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d054') } 2015-04-01T16:21:28.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d055') } 2015-04-01T16:21:28.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d056') } 2015-04-01T16:21:28.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d057') } 2015-04-01T16:21:28.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d058') } 2015-04-01T16:21:28.721+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.721+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.722+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.722+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d059') } 2015-04-01T16:21:28.722+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.725+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.725+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.725+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05a') } 2015-04-01T16:21:28.725+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05b') } 2015-04-01T16:21:28.726+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|133, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.727+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.728+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.728+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05c') } 2015-04-01T16:21:28.728+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05d') } 2015-04-01T16:21:28.728+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|135, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.754+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.755+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.755+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "csharp714" } 2015-04-01T16:21:28.755+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.csharp714 {} 2015-04-01T16:21:28.755+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:663000 2015-04-01T16:21:28.755+0000 D STORAGE [repl writer worker 15] Tests04011621.csharp714: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.756+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:28.756+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:665000 2015-04-01T16:21:28.756+0000 D STORAGE [repl writer worker 15] Tests04011621.csharp714: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.756+0000 D 
QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.757+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|136, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.757+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.758+0000 D REPL [rsBackgroundSync] bgsync buffer has 242 bytes 2015-04-01T16:21:28.758+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 0 } 2015-04-01T16:21:28.758+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.759+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|137, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.760+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.760+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.760+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:28.760+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:28.760+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 
1427905288000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.761+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.762+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.762+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:28.762+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:28.763+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.767+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.767+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:28.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:28.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 8 } 2015-04-01T16:21:28.768+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|145, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.770+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.771+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 9 } 2015-04-01T16:21:28.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:28.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 11 } 2015-04-01T16:21:28.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 12 } 2015-04-01T16:21:28.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|149, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.773+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.774+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.774+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 13 } 2015-04-01T16:21:28.774+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 14 } 2015-04-01T16:21:28.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 15 } 2015-04-01T16:21:28.775+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|152, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.776+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.776+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 16 } 2015-04-01T16:21:28.777+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.779+0000 D REPL [rsBackgroundSync] bgsync buffer has 121 bytes 2015-04-01T16:21:28.779+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.780+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 17 } 2015-04-01T16:21:28.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 18 } 2015-04-01T16:21:28.781+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|155, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.782+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.784+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 19 } 2015-04-01T16:21:28.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 20 } 2015-04-01T16:21:28.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 21 } 2015-04-01T16:21:28.784+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|158, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.786+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.787+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 22 } 2015-04-01T16:21:28.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 23 } 2015-04-01T16:21:28.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 24 } 2015-04-01T16:21:28.788+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|161, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.790+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.791+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 25 } 2015-04-01T16:21:28.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 26 } 2015-04-01T16:21:28.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 27 } 2015-04-01T16:21:28.793+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|164, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.793+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 242 bytes 2015-04-01T16:21:28.795+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 28 } 2015-04-01T16:21:28.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 29 } 2015-04-01T16:21:28.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 30 } 2015-04-01T16:21:28.797+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|167, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.797+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.799+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 31 } 2015-04-01T16:21:28.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 32 } 2015-04-01T16:21:28.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 33 } 2015-04-01T16:21:28.800+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|170, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.801+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.802+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.803+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 34 } 2015-04-01T16:21:28.803+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 35 } 2015-04-01T16:21:28.804+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 36 } 2015-04-01T16:21:28.804+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|173, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.805+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.805+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.806+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 37 } 2015-04-01T16:21:28.806+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 38 } 2015-04-01T16:21:28.806+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 39 } 2015-04-01T16:21:28.806+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 40 } 2015-04-01T16:21:28.806+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|177, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.807+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.807+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.807+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 41 } 2015-04-01T16:21:28.807+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 42 } 2015-04-01T16:21:28.807+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 43 } 2015-04-01T16:21:28.808+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.811+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.813+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 44 } 2015-04-01T16:21:28.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 45 } 2015-04-01T16:21:28.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 46 } 2015-04-01T16:21:28.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 47 } 2015-04-01T16:21:28.814+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.814+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|184, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.815+0000 D QUERY 
[rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.816+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 48 } 2015-04-01T16:21:28.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 49 } 2015-04-01T16:21:28.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 50 } 2015-04-01T16:21:28.817+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|187, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.818+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.819+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 51 } 2015-04-01T16:21:28.820+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 52 } 2015-04-01T16:21:28.820+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 53 } 2015-04-01T16:21:28.820+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|190, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.821+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.822+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.822+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 54 } 2015-04-01T16:21:28.823+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 55 } 2015-04-01T16:21:28.823+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 56 } 2015-04-01T16:21:28.823+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|193, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.824+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.825+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.826+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 57 } 2015-04-01T16:21:28.826+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 58 } 2015-04-01T16:21:28.826+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 59 } 2015-04-01T16:21:28.827+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|196, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.828+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.829+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.829+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 60 } 2015-04-01T16:21:28.830+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 61 } 2015-04-01T16:21:28.830+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 62 } 2015-04-01T16:21:28.830+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.831+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.831+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.832+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 63 } 2015-04-01T16:21:28.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 64 } 2015-04-01T16:21:28.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 65 } 2015-04-01T16:21:28.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 66 } 2015-04-01T16:21:28.834+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.835+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.837+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 67 } 2015-04-01T16:21:28.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 68 } 2015-04-01T16:21:28.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 69 } 2015-04-01T16:21:28.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 70 } 2015-04-01T16:21:28.838+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|207, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.839+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.840+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 71 } 2015-04-01T16:21:28.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 72 } 2015-04-01T16:21:28.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 73 } 2015-04-01T16:21:28.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 74 } 2015-04-01T16:21:28.842+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.843+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.844+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.845+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 75 } 2015-04-01T16:21:28.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 76 } 2015-04-01T16:21:28.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 77 } 2015-04-01T16:21:28.846+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.847+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.849+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 78 } 2015-04-01T16:21:28.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 79 } 2015-04-01T16:21:28.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 80 } 2015-04-01T16:21:28.850+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|217, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.850+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.851+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:28.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 81 } 2015-04-01T16:21:28.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 82 } 2015-04-01T16:21:28.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 83 } 2015-04-01T16:21:28.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 84 } 2015-04-01T16:21:28.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 85 } 2015-04-01T16:21:28.852+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.856+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.856+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 86 } 2015-04-01T16:21:28.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 87 } 2015-04-01T16:21:28.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 88 } 2015-04-01T16:21:28.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 89 } 2015-04-01T16:21:28.857+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|226, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.859+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.860+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 90 } 2015-04-01T16:21:28.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 91 } 2015-04-01T16:21:28.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 92 } 2015-04-01T16:21:28.861+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|229, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.863+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.863+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.864+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 93 } 2015-04-01T16:21:28.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 94 } 2015-04-01T16:21:28.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 95 } 2015-04-01T16:21:28.865+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|232, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.866+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.867+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:28.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 96 } 2015-04-01T16:21:28.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 97 } 2015-04-01T16:21:28.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 98 } 2015-04-01T16:21:28.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 99 } 2015-04-01T16:21:28.868+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|236, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.870+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.871+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.871+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:28.871+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:685000 2015-04-01T16:21:28.871+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.csharp714 properties: { v: 1, key: { Guid: 1 }, name: "Guid_1", ns: "Tests04011621.csharp714" } 2015-04-01T16:21:28.872+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:28.872+0000 D STORAGE [repl writer worker 15] Tests04011621.csharp714: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.872+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Guid_1 2015-04-01T16:21:28.872+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:28.873+0000 I INDEX [repl writer worker 15] build index done. scanned 100 total records. 0 secs 2015-04-01T16:21:28.873+0000 D STORAGE [repl writer worker 15] Tests04011621.csharp714: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.873+0000 D STORAGE [repl writer worker 15] Tests04011621.csharp714: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.873+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|237, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.883+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.884+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffa') } 2015-04-01T16:21:28.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffb') } 2015-04-01T16:21:28.884+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|239, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 555 bytes 2015-04-01T16:21:28.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 2220 bytes 2015-04-01T16:21:28.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 3885 bytes 2015-04-01T16:21:28.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 5550 bytes 2015-04-01T16:21:28.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 7215 bytes 2015-04-01T16:21:28.887+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.891+0000 D REPL [rsBackgroundSync] bgsync buffer has 1110 bytes 2015-04-01T16:21:28.891+0000 D REPL [rsBackgroundSync] bgsync buffer has 2775 bytes 2015-04-01T16:21:28.899+0000 D REPL [rsSync] replication batch size is 70 2015-04-01T16:21:28.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffc') } 2015-04-01T16:21:28.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffd') } 2015-04-01T16:21:28.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cffe') } 2015-04-01T16:21:28.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169cfff') } 2015-04-01T16:21:28.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d000') } 2015-04-01T16:21:28.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d001') } 2015-04-01T16:21:28.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d002') } 2015-04-01T16:21:28.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d003') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d004') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d005') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d006') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d007') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d008') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b08b5355f778169d009') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00a') } 2015-04-01T16:21:28.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00b') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00c') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00d') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00e') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d00f') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d010') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d011') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d012') } 2015-04-01T16:21:28.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d013') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d014') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d015') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d016') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d017') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d018') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d019') } 
2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01a') } 2015-04-01T16:21:28.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01b') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01c') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01d') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01e') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d01f') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d020') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d021') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d022') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d023') } 2015-04-01T16:21:28.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d024') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d025') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d026') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d027') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d028') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d029') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b08b5355f778169d02a') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02b') } 2015-04-01T16:21:28.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02c') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02d') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02e') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d02f') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d030') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d031') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d032') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d033') } 2015-04-01T16:21:28.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d034') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d035') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d036') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d037') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d038') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d039') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03a') } 
2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03b') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03c') } 2015-04-01T16:21:28.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03d') } 2015-04-01T16:21:28.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03e') } 2015-04-01T16:21:28.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d03f') } 2015-04-01T16:21:28.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d040') } 2015-04-01T16:21:28.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d041') } 2015-04-01T16:21:28.909+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.910+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|309, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.912+0000 D REPL [rsSync] replication batch size is 33 2015-04-01T16:21:28.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d042') } 2015-04-01T16:21:28.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d043') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d044') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b08b5355f778169d045') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d046') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d047') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d048') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d049') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04a') } 2015-04-01T16:21:28.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04b') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04c') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04d') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04e') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d04f') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d050') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d051') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d052') } 2015-04-01T16:21:28.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d053') } 2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d054') } 2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d055') } 
2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d056') } 2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d057') } 2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d058') } 2015-04-01T16:21:28.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d059') } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05a') } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05b') } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05c') } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08b5355f778169d05d') } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:28.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:28.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:28.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:28.917+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|342, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.928+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.929+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.929+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.929+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:28.929+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|344, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.931+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.932+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:28.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:28.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:28.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:28.933+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|347, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.937+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.938+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec5') } 2015-04-01T16:21:28.938+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|348, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.940+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.940+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec6') } 2015-04-01T16:21:28.941+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|349, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.943+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:28.943+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.944+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.945+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec7') } 2015-04-01T16:21:28.945+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec8') } 2015-04-01T16:21:28.945+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|351, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.946+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.947+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.948+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bec9') } 2015-04-01T16:21:28.948+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452beca') } 2015-04-01T16:21:28.948+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|353, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.949+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.951+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.951+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452becb') } 2015-04-01T16:21:28.951+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|354, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.953+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.953+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:28.953+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452becc') } 2015-04-01T16:21:28.953+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452becd') } 2015-04-01T16:21:28.954+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|356, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.956+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.956+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.956+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b08e15b5605d452bece') } 2015-04-01T16:21:28.956+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|357, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.983+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.983+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.984+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:28.984+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:28.984+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:28.984+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:28.984+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.984+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:28.984+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|358, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.989+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.990+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.990+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:28.990+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:28.991+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:28.991+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.991+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:28.992+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:28.992+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.992+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.992+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|359, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.993+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:28.993+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|360, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:28.997+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:28.997+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:28.997+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:28.997+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:28.997+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:28.998+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:28.998+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:28.998+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:28.998+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|361, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.000+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.000+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.001+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.001+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.001+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|362, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.002+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.004+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:29.005+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905288000|363, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.010+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.010+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.010+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.010+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.010+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.010+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.010+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.010+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.011+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:29.011+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: 
"localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.012+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.013+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.013+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.013+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.013+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.013+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.014+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.014+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.016+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.016+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.017+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:29.017+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:29.017+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:29.017+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:29.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452becf') } 2015-04-01T16:21:29.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed0') } 2015-04-01T16:21:29.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed1') } 2015-04-01T16:21:29.018+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.057+0000 D 
QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.057+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.057+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.058+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.058+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.058+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.058+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.058+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.058+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.060+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.060+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.060+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.060+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.060+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.060+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.060+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.061+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.061+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.061+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.061+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.064+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed2') } 2015-04-01T16:21:29.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed3') } 2015-04-01T16:21:29.065+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.066+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.066+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed4') } 2015-04-01T16:21:29.066+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.068+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:29.068+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:29.068+0000 I COMMAND 
[conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:29.071+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.072+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.072+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.072+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.072+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.072+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.072+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.072+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.072+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.088+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.090+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.090+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.090+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.090+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.090+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.090+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.091+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.091+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.091+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.091+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.092+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.106+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.106+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.107+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.107+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.107+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.107+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.107+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.107+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.107+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.111+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.111+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.111+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.111+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.111+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.111+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.111+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.112+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.112+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.112+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.112+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.113+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.113+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.122+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:29.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.123+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.123+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.124+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.124+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.124+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.124+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.124+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.139+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.141+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.141+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.141+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.142+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.142+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.142+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.142+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.142+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.143+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.143+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.150+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.151+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.151+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.151+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.151+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.151+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.151+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.151+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.152+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.153+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.155+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.156+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.156+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.156+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.156+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.156+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.157+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.157+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.157+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.157+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.158+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.166+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.166+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.166+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.166+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.166+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.166+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.166+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.166+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.182+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.182+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:29.184+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.185+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:29.185+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:31.185Z
2015-04-01T16:21:29.185+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.185+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.186+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.186+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.186+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.186+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.186+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.186+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.187+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.187+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.187+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.198+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.199+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.199+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.199+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.199+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.199+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.199+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.199+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.200+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.201+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.204+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.204+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.204+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.205+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.206+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.206+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:29.206+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.214+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.214+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.214+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:29.214+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:29.214+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:29.214+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.214+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.214+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:29.215+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.230+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.230+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.230+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:29.230+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:29.230+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:29.230+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.231+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.231+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:29.231+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.231+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.234+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.234+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.234+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.234+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:6a5000
2015-04-01T16:21:29.234+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Value: 1, SubValues.Value: 1 }, name: "Value_1_SubValues.Value_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.234+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:29.234+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.235+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Value_1_SubValues.Value_1
2015-04-01T16:21:29.235+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:29.235+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs
2015-04-01T16:21:29.235+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.235+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.235+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.249+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:29.249+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.250+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.250+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed5') }
2015-04-01T16:21:29.251+0000 D INDEX [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - index { Value: 1, SubValues.Value: 1 } set to multi key.
2015-04-01T16:21:29.251+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.254+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.256+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:29.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed6') }
2015-04-01T16:21:29.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed7') }
2015-04-01T16:21:29.256+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.267+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.268+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:29.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed5') }
2015-04-01T16:21:29.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed6') }
2015-04-01T16:21:29.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed7') }
2015-04-01T16:21:29.269+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|37, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.274+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.274+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: -1 }
2015-04-01T16:21:29.275+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.282+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.282+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: -1 }
2015-04-01T16:21:29.283+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.285+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.287+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: -1 }
2015-04-01T16:21:29.287+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.302+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.302+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.303+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:29.303+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:29.303+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Value: 1, SubValues.Value: 1 }, name: "Value_1_SubValues.Value_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.303+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.303+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.307+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.307+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.308+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.308+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6a5000
2015-04-01T16:21:29.308+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.308+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:29.308+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.308+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:29.308+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:29.308+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs
2015-04-01T16:21:29.309+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.309+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.309+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.310+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.312+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.312+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:29.312+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:6c5000
2015-04-01T16:21:29.312+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.312+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:29.312+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.312+0000 D INDEX [repl writer worker 15] bulk commit starting for index: y_1
2015-04-01T16:21:29.312+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:29.313+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs
2015-04-01T16:21:29.313+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.313+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|43, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:29.317+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:29.317+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:29.318+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:29.318+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:29.319+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.319+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.319+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:29.319+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:29.320+0000 D REPL [rsBackgroundSync] bgsync buffer has 300 bytes
2015-04-01T16:21:29.321+0000 D REPL [SyncSourceFeedback] Sending
slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.321+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.321+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.322+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.322+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:29.322+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1, y: 1 }, name: "x_1_y_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.322+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:29.322+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.322+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1_y_1 2015-04-01T16:21:29.322+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:29.322+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 
0 secs 2015-04-01T16:21:29.322+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.322+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.322+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.322+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.324+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.324+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "x_1_y_1" } 2015-04-01T16:21:29.324+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:29.324+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.324+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.324+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.325+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.325+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.326+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:29.326+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1, y: 1 }, name: "x_1_y_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.326+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:29.326+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.326+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1_y_1 2015-04-01T16:21:29.326+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:29.327+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs 2015-04-01T16:21:29.327+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.327+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.327+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.327+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.328+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.330+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "x_1_y_1" } 2015-04-01T16:21:29.330+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:29.330+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.330+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.395+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:29.395+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:29.396+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:31.395Z 2015-04-01T16:21:29.502+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.503+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.503+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.503+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.503+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.503+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.503+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.503+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.504+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.525+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.525+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.525+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.526+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.527+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.527+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:29.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:29.528+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.561+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.562+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.562+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.562+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.562+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.562+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.562+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.562+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.563+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|53, memberId: 1, cfgver: 1, config: { _id: 1, 
host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.564+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.564+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.564+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.565+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.567+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.567+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.567+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|54, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:29.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:29.568+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.575+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.576+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.576+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.576+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.576+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.576+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.576+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.576+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.577+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.578+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.579+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.579+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.579+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.580+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|58, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.580+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.581+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:29.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:29.581+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.589+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.589+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.589+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.589+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.590+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.590+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.590+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.590+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.590+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|61, memberId: 1, cfgver: 1, config: { _id: 1, 
host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.592+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:29.592+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.592+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.593+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.593+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.594+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.594+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.595+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:29.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:29.595+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.611+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.611+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.612+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.612+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.612+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.612+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.612+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.612+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.612+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|65, memberId: 1, cfgver: 1, config: { _id: 1, 
host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.617+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.618+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.618+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.619+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.619+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.619+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.620+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed9') } 2015-04-01T16:21:29.621+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.625+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.625+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bed9') } 2015-04-01T16:21:29.625+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|68, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.632+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.632+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452beda') } 2015-04-01T16:21:29.632+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|69, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.635+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.636+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.636+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bedb') } 2015-04-01T16:21:29.636+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.813+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.815+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.815+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.815+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.815+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.815+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.815+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.815+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.815+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|71, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.827+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.828+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.828+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.828+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.828+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.828+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.829+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.829+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.829+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.830+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.830+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.830+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.830+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bedd') } 2015-04-01T16:21:29.831+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|73, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.832+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.833+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bede') } 2015-04-01T16:21:29.834+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.835+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.836+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bedf') } 2015-04-01T16:21:29.836+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.842+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.843+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee0') } 2015-04-01T16:21:29.844+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|76, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.910+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:29.911+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.912+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.912+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.912+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.912+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.912+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.912+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.912+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.912+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.922+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.923+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.923+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.923+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.924+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.924+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.924+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.924+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.924+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.924+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.924+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.924+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee1') } 2015-04-01T16:21:29.926+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|79, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.927+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.928+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee2') } 2015-04-01T16:21:29.928+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.929+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.929+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.930+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee3') } 2015-04-01T16:21:29.930+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee4') } 2015-04-01T16:21:29.931+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.932+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.932+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:29.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee5') } 2015-04-01T16:21:29.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee6') } 2015-04-01T16:21:29.934+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|84, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.983+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.983+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.983+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:29.983+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:29.983+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:29.983+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:29.983+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.983+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:29.984+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: 
Timestamp 1427905289000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.988+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.989+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.989+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:29.989+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:29.989+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.990+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.990+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.990+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee7') } 2015-04-01T16:21:29.992+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.996+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.997+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:29.997+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee8') } 2015-04-01T16:21:29.998+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|88, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:29.999+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:29.999+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.000+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b09e15b5605d452bee9') } 2015-04-01T16:21:30.000+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905289000|89, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.069+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.069+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.070+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:30.070+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:30.070+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:30.070+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:30.070+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.070+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:30.070+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.074+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes 2015-04-01T16:21:30.076+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.076+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.076+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:30.076+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.077+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.077+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.077+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beea') } 2015-04-01T16:21:30.079+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.079+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.079+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beeb') } 2015-04-01T16:21:30.082+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.082+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.082+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beec') } 2015-04-01T16:21:30.082+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.158+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.158+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.158+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:30.158+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:30.159+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:30.159+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:30.159+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.159+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:30.159+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.173+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.173+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.174+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:30.174+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:30.174+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.174+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.175+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beee') } 2015-04-01T16:21:30.177+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.177+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.177+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beed') } 2015-04-01T16:21:30.178+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.187+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.188+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:30.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452beef') } 2015-04-01T16:21:30.188+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.190+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:30.190+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:30.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452bef1') } 2015-04-01T16:21:30.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ae15b5605d452bef0') } 2015-04-01T16:21:30.191+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905290000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:30.555+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:30.556+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:30.556+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 } 
2015-04-01T16:21:30.556+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:31.018+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:31.018+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:31.018+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:31.069+0000 D COMMAND [conn14] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:31.069+0000 D COMMAND [conn14] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:31.069+0000 I COMMAND [conn14] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:31.186+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:31.186+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:31.186+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:33.186Z 2015-04-01T16:21:31.395+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:31.395+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:31.395+0000 D REPL 
[ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:33.395Z 2015-04-01T16:21:31.591+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.592+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.592+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "temp" } 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.temp {} 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6e5000 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:31.593+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.595+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.595+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.596+0000 D REPL [rsBackgroundSync] bgsync buffer has 146 bytes 2015-04-01T16:21:31.596+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:31.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef2') } 2015-04-01T16:21:31.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef3') } 2015-04-01T16:21:31.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef4') } 2015-04-01T16:21:31.597+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.597+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.598+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:31.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef5') } 2015-04-01T16:21:31.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef6') } 2015-04-01T16:21:31.599+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.599+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.600+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:31.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef7') } 2015-04-01T16:21:31.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef8') } 2015-04-01T16:21:31.600+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.637+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.637+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.637+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "temp" } 2015-04-01T16:21:31.638+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.temp 2015-04-01T16:21:31.638+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.temp 2015-04-01T16:21:31.638+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.temp" } 2015-04-01T16:21:31.638+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.638+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:31.638+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|9, memberId: 1, cfgver: 1, 
config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.640+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.640+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.640+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "temp" } 2015-04-01T16:21:31.640+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.temp {} 2015-04-01T16:21:31.640+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:31.640+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.640+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:31.641+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:31.641+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.641+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.641+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.642+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:31.642+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bef9') } 2015-04-01T16:21:31.642+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452befa') } 2015-04-01T16:21:31.642+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452befb') } 2015-04-01T16:21:31.644+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.644+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.645+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:31.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452befc') } 2015-04-01T16:21:31.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452befd') } 2015-04-01T16:21:31.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452befe') } 2015-04-01T16:21:31.646+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.647+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.648+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.648+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452beff') } 2015-04-01T16:21:31.649+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.673+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.673+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.673+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "temp" } 2015-04-01T16:21:31.674+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.temp 2015-04-01T16:21:31.674+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.temp 2015-04-01T16:21:31.674+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.temp" } 2015-04-01T16:21:31.674+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.674+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:31.674+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.675+0000 D REPL [rsBackgroundSync] bgsync buffer has 245 bytes 2015-04-01T16:21:31.675+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.676+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.676+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "temp" } 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.temp {} 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:31.676+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.677+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.677+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.680+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:31.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf00') } 2015-04-01T16:21:31.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf01') } 2015-04-01T16:21:31.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf02') } 2015-04-01T16:21:31.681+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.681+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.682+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:31.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf03') } 2015-04-01T16:21:31.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf04') } 2015-04-01T16:21:31.682+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.682+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.682+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf05') } 2015-04-01T16:21:31.683+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.731+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.731+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.732+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "temp" } 2015-04-01T16:21:31.732+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.temp 2015-04-01T16:21:31.732+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.temp 2015-04-01T16:21:31.732+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.temp" } 2015-04-01T16:21:31.732+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.732+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:31.732+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.734+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.734+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:31.735+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "temp" } 2015-04-01T16:21:31.735+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.temp {} 2015-04-01T16:21:31.736+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:31.736+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.736+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:31.736+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000 2015-04-01T16:21:31.736+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset 2015-04-01T16:21:31.736+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:31.737+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:31.737+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:31.737+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf06') } 2015-04-01T16:21:31.737+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf07') } 2015-04-01T16:21:31.737+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf08') } 2015-04-01T16:21:31.738+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0be15b5605d452bf09') } 2015-04-01T16:21:31.738+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905291000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.332+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.333+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.333+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.333+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.334+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.334+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.334+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.334+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.334+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.335+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.335+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.336+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.336+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.336+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.337+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.337+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.338+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.338+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6a5000 2015-04-01T16:21:32.338+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { a: 1, b: 1 }, name: "i", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.338+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:32.338+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.338+0000 D INDEX [repl writer worker 15] bulk commit starting for index: i 2015-04-01T16:21:32.338+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:32.338+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 
0 secs 2015-04-01T16:21:32.338+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.338+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.341+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.342+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.343+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0a') } 2015-04-01T16:21:32.343+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.346+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:32.346+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.346+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:32.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0b') } 2015-04-01T16:21:32.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0c') } 2015-04-01T16:21:32.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0d') } 2015-04-01T16:21:32.347+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.512+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.514+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:32.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0a') } 2015-04-01T16:21:32.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0b') } 2015-04-01T16:21:32.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0c') } 2015-04-01T16:21:32.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0d') } 2015-04-01T16:21:32.519+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.519+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.522+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.522+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:32.522+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:32.522+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { a: 1, b: 1 }, name: "i", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.522+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.523+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.523+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0e') }
2015-04-01T16:21:32.524+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.525+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.526+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0f') }
2015-04-01T16:21:32.526+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.528+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.528+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:32.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf10') }
2015-04-01T16:21:32.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf11') }
2015-04-01T16:21:32.529+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.560+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.561+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0e') }
2015-04-01T16:21:32.562+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.563+0000 D REPL [rsBackgroundSync] bgsync buffer has 222 bytes
2015-04-01T16:21:32.563+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.563+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf0f') }
2015-04-01T16:21:32.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf10') }
2015-04-01T16:21:32.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf11') }
2015-04-01T16:21:32.564+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.564+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.564+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.564+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:32.565+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:32.565+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.573+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.574+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.574+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:32.574+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:32.574+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.575+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.578+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf12') }
2015-04-01T16:21:32.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf13') }
2015-04-01T16:21:32.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf14') }
2015-04-01T16:21:32.578+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.579+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.579+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf15') }
2015-04-01T16:21:32.580+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.616+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.619+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.619+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.619+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.619+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.619+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.619+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.619+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.620+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.620+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.620+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.621+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.621+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.621+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.621+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.621+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.621+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:32.622+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.622+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.622+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.622+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf16') }
2015-04-01T16:21:32.624+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.631+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.632+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf16') }
2015-04-01T16:21:32.633+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.634+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.636+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.636+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:32.636+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:32.636+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.636+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.637+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.638+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf17') }
2015-04-01T16:21:32.639+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.646+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.647+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf17') }
2015-04-01T16:21:32.647+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.649+0000 D REPL [rsBackgroundSync] bgsync buffer has 125 bytes
2015-04-01T16:21:32.649+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.649+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.649+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:32.649+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:32.650+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.650+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.652+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:32.652+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf18') }
2015-04-01T16:21:32.653+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf19') }
2015-04-01T16:21:32.653+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.654+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.654+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:32.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1a') }
2015-04-01T16:21:32.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1b') }
2015-04-01T16:21:32.654+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.659+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.659+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.659+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "tmp.agg_out.2", temp: true }
2015-04-01T16:21:32.659+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.tmp.agg_out.2 { temp: true }
2015-04-01T16:21:32.660+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 0:6e7000
2015-04-01T16:21:32.660+0000 D STORAGE [repl writer worker 15] Tests04011621.tmp.agg_out.2: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.660+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.660+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6a5000
2015-04-01T16:21:32.660+0000 D STORAGE [repl writer worker 15] Tests04011621.tmp.agg_out.2: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.660+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.663+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.664+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:32.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:32.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:32.665+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.665+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.666+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.667+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011621.tmp.agg_out.2", to: "Tests04011621.temp", dropTarget: true }
2015-04-01T16:21:32.667+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011621.tmp.agg_out.2", to: "Tests04011621.temp", dropTarget: true }
2015-04-01T16:21:32.667+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.temp
2015-04-01T16:21:32.667+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.temp" }
2015-04-01T16:21:32.667+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.667+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.669+0000 D STORAGE [repl writer worker 15] Tests04011621.temp: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.669+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|43, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.669+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.670+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.670+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.670+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.670+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.670+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.670+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.670+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.672+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.672+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.674+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.674+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:32.674+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.675+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.675+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.676+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1c') }
2015-04-01T16:21:32.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1d') }
2015-04-01T16:21:32.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1e') }
2015-04-01T16:21:32.677+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.716+0000 D REPL [rsBackgroundSync] bgsync buffer has 111 bytes
2015-04-01T16:21:32.718+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.723+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:32.723+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1c') }
2015-04-01T16:21:32.723+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1e') }
2015-04-01T16:21:32.724+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.741+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.742+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.742+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.742+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.742+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.742+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.742+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.742+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.743+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.746+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.748+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.749+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.749+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.749+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.749+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.749+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.750+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:32.750+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.750+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.751+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.751+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf1f') }
2015-04-01T16:21:32.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf20') }
2015-04-01T16:21:32.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf21') }
2015-04-01T16:21:32.752+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.755+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.755+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.755+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.755+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.755+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.755+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.755+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.755+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.756+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.759+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.759+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.759+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.759+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.759+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.759+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.759+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.760+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:32.760+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.760+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.761+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.763+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.763+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf22') }
2015-04-01T16:21:32.763+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf23') }
2015-04-01T16:21:32.763+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf24') }
2015-04-01T16:21:32.764+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.767+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.767+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf22') }
2015-04-01T16:21:32.768+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.770+0000 D REPL [rsBackgroundSync] bgsync buffer has 402 bytes
2015-04-01T16:21:32.770+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.771+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:32.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf23') }
2015-04-01T16:21:32.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf24') }
2015-04-01T16:21:32.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf24') }
2015-04-01T16:21:32.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d05e') }
2015-04-01T16:21:32.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.782+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.783+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.783+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.784+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.784+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.784+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.784+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.784+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.784+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.786+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.786+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.787+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.787+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.788+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.788+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.789+0000 D REPL [rsSync] replication batch size is 7 2015-04-01T16:21:32.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf25') } 2015-04-01T16:21:32.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf26') } 2015-04-01T16:21:32.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf27') } 2015-04-01T16:21:32.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf28') } 2015-04-01T16:21:32.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf27') } 2015-04-01T16:21:32.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf28') } 2015-04-01T16:21:32.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf27') } 2015-04-01T16:21:32.792+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.793+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.793+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf28') } 2015-04-01T16:21:32.794+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.801+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.802+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.802+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.802+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.802+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.803+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.803+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.803+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.803+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|76, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.804+0000 D REPL [rsBackgroundSync] bgsync buffer has 355 bytes 2015-04-01T16:21:32.806+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.806+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.806+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.806+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.807+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.807+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.807+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.807+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.808+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.808+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.809+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.809+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:32.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf29') } 2015-04-01T16:21:32.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf29') } 2015-04-01T16:21:32.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf29') } 2015-04-01T16:21:32.810+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.813+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.815+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.815+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.815+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.815+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.815+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.815+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.815+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.815+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|81, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.816+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.817+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.817+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.817+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.817+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.817+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.818+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.818+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.818+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.820+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.820+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.821+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:32.822+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2a') } 2015-04-01T16:21:32.822+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2a') } 2015-04-01T16:21:32.822+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.823+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|84, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.823+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.823+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.823+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.823+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.823+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.824+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.824+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.824+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: 
Timestamp 1427905292000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.826+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.826+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.826+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.826+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.827+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.827+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.827+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.827+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.827+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.827+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.827+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:32.828+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.829+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d05f') } 2015-04-01T16:21:32.829+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d05f') } 2015-04-01T16:21:32.829+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|88, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.829+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.830+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:32.831+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d060') } 2015-04-01T16:21:32.831+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d060') } 2015-04-01T16:21:32.831+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|90, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.832+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.832+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.832+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d061') } 2015-04-01T16:21:32.833+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.838+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.839+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.839+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:32.839+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:32.839+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:32.839+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:32.839+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.839+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:32.840+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|92, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.846+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.847+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.847+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:32.847+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:32.847+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000 2015-04-01T16:21:32.847+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.847+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:32.848+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:32.848+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:32.848+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.849+0000 D REPL [rsBackgroundSync] bgsync buffer has 121 bytes 2015-04-01T16:21:32.850+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.851+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:32.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d062') } 2015-04-01T16:21:32.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0cb5355f778169d062') } 2015-04-01T16:21:32.852+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|95, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.859+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.859+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2b') } 2015-04-01T16:21:32.860+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|96, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.865+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.866+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:32.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2b') } 2015-04-01T16:21:32.866+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.867+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:32.868+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:32.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2c') } 2015-04-01T16:21:32.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2d') } 2015-04-01T16:21:32.868+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|99, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:32.871+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.872+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.872+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.872+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:32.872+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.872+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:32.872+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.872+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:32.872+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:32.873+0000 I INDEX [repl writer worker 15] build index done. scanned 2 total records. 0 secs
2015-04-01T16:21:32.873+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.873+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.873+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.877+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.878+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:32.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2c') }
2015-04-01T16:21:32.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2d') }
2015-04-01T16:21:32.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2e') }
2015-04-01T16:21:32.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2f') }
2015-04-01T16:21:32.880+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|104, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.883+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.883+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2e') }
2015-04-01T16:21:32.884+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|105, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.886+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.887+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf2f') }
2015-04-01T16:21:32.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf30') }
2015-04-01T16:21:32.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf31') }
2015-04-01T16:21:32.888+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.905+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.906+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf30') }
2015-04-01T16:21:32.906+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.908+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:32.908+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.909+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:32.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf31') }
2015-04-01T16:21:32.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf32') }
2015-04-01T16:21:32.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf33') }
2015-04-01T16:21:32.910+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.915+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.915+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.915+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.915+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.915+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.915+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.916+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.916+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.916+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.916+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.916+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|113, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.921+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.921+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.921+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.921+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.921+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.921+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.921+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.921+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:32.922+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.922+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.922+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.922+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf34') }
2015-04-01T16:21:32.922+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|115, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.924+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.924+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf35') }
2015-04-01T16:21:32.924+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|116, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.937+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.937+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:32.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf34') }
2015-04-01T16:21:32.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ce15b5605d452bf35') }
2015-04-01T16:21:32.938+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|118, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.943+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.943+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.943+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.943+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.943+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.943+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.944+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.944+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.944+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|119, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.957+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.957+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.957+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:32.957+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:32.957+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.957+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.957+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.958+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:32.958+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.958+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.962+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.962+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.963+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:32.963+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:32.963+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:32.963+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:32.963+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.963+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.964+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.970+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.970+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.970+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", autoIndexId: false }
2015-04-01T16:21:32.970+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { autoIndexId: false }
2015-04-01T16:21:32.971+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.971+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.971+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|122, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.973+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.974+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.974+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:32.974+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:32.974+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:32.974+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.974+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.976+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.977+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.977+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", autoIndexId: true }
2015-04-01T16:21:32.977+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { autoIndexId: true }
2015-04-01T16:21:32.977+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:32.977+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.978+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.978+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:32.978+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.979+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|124, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.983+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:32.983+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.983+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.984+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:32.984+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:32.984+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:32.984+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.cappedcollection" }
2015-04-01T16:21:32.984+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.984+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.984+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|125, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.989+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.989+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.990+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", capped: true, size: 10000, max: 1000 }
2015-04-01T16:21:32.990+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { capped: true, size: 10000, max: 1000 }
2015-04-01T16:21:32.990+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:6e9000
2015-04-01T16:21:32.990+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.991+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:32.991+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:32.991+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.991+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.993+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.994+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.994+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:32.994+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:32.994+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:32.994+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.cappedcollection" }
2015-04-01T16:21:32.994+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:32.994+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:32.995+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|127, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:32.999+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:32.999+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:32.999+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", capped: true, size: 10000 }
2015-04-01T16:21:32.999+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { capped: true, size: 10000 }
2015-04-01T16:21:33.000+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 0:6ec000
2015-04-01T16:21:33.000+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.000+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.000+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:33.001+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.002+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905292000|128, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.002+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.002+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.003+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:33.003+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:33.003+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:33.003+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.cappedcollection" }
2015-04-01T16:21:33.003+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.003+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.003+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.011+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.011+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", flags: 0 }
2015-04-01T16:21:33.011+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { flags: 0 }
2015-04-01T16:21:33.012+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:33.012+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.012+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.013+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:33.013+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.013+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.017+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.017+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.018+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:33.018+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:33.018+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:33.018+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.cappedcollection" }
2015-04-01T16:21:33.018+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.018+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.019+0000 D COMMAND [conn15] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:33.019+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.019+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.020+0000 D COMMAND [conn15] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:33.020+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.020+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", flags: 1 }
2015-04-01T16:21:33.020+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { flags: 1 }
2015-04-01T16:21:33.021+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:33.021+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.021+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.021+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:33.021+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.021+0000 I COMMAND [conn15] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 1ms
2015-04-01T16:21:33.022+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.031+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.032+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.032+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.032+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.032+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.032+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.032+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.033+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:33.033+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.033+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.034+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.034+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.035+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf36') }
2015-04-01T16:21:33.035+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.035+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.036+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.036+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:33.036+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:33.036+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.040+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.041+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.041+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:33.041+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:33.042+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.043+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.043+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.044+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.044+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 0:6ef000
2015-04-01T16:21:33.045+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.045+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:33.045+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.045+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1
2015-04-01T16:21:33.045+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:33.045+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs
2015-04-01T16:21:33.045+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.045+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.046+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.050+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.051+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.051+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" }
2015-04-01T16:21:33.051+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection
2015-04-01T16:21:33.051+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.051+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.052+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" },
slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.053+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.053+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.054+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.054+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.054+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "Tests04011621.testcollection", expireAfterSeconds: 3600 } 2015-04-01T16:21:33.054+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.054+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.054+0000 D INDEX [repl writer worker 15] bulk commit starting for index: ts_1 2015-04-01T16:21:33.054+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.054+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 
0 secs 2015-04-01T16:21:33.054+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.054+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.054+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.067+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.068+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.068+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf36') } 2015-04-01T16:21:33.069+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.070+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.070+0000 D NETWORK [conn14] SocketException: remote: 127.0.0.1:62975 error: 9001 socket exception [CLOSED] server [127.0.0.1:62975] 2015-04-01T16:21:33.071+0000 I NETWORK [conn14] end connection 127.0.0.1:62975 (3 connections now open) 2015-04-01T16:21:33.071+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.072+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.072+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.072+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "Tests04011621.testcollection", expireAfterSeconds: 3600 } 2015-04-01T16:21:33.072+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.072+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.072+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.074+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf37') } 2015-04-01T16:21:33.075+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.075+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.076+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf38') } 2015-04-01T16:21:33.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf39') } 2015-04-01T16:21:33.079+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62992 #18 (4 connections now open) 2015-04-01T16:21:33.079+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.079+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.080+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3a') } 2015-04-01T16:21:33.080+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.084+0000 D QUERY [conn18] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.084+0000 D COMMAND [conn18] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4A2B35656A64535363484F6E54717A2B6C6C46506668594F564D614274786B72) } 2015-04-01T16:21:33.085+0000 I COMMAND [conn18] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4A2B35656A64535363484F6E54717A2B6C6C46506668594F564D614274786B72) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms 2015-04-01T16:21:33.115+0000 D COMMAND [conn18] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D4A2B35656A64535363484F6E54717A2B6C6C46506668594F564D614274786B725174786F51514D624C48684A54713433484935734A5A6F5A4F6E337970...), conversationId: 1 } 2015-04-01T16:21:33.115+0000 I COMMAND [conn18] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4A2B35656A64535363484F6E54717A2B6C6C46506668594F564D614274786B725174786F51514D624C48684A54713433484935734A5A6F5A4F6E337970...), 
conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:21:33.115+0000 D COMMAND [conn18] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } 2015-04-01T16:21:33.115+0000 I ACCESS [conn18] Successfully authenticated as principal __system on local 2015-04-01T16:21:33.115+0000 I COMMAND [conn18] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:21:33.116+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:33.116+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:33.116+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:33.121+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.122+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:33.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf37') } 2015-04-01T16:21:33.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf38') } 2015-04-01T16:21:33.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf39') } 2015-04-01T16:21:33.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3a') } 2015-04-01T16:21:33.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.123+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.123+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.124+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.124+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } 
] } 2015-04-01T16:21:33.124+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.125+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:33.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3b') } 2015-04-01T16:21:33.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3c') } 2015-04-01T16:21:33.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3d') } 2015-04-01T16:21:33.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3e') } 2015-04-01T16:21:33.125+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.130+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.131+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.131+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.131+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.131+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.131+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.131+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.131+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.132+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.132+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.133+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.134+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.135+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.135+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.135+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.135+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.136+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.136+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.136+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.137+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.137+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.138+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3f') } 2015-04-01T16:21:33.138+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.142+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.142+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf3f') } 2015-04-01T16:21:33.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.145+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.146+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.146+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.146+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.147+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.147+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.147+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf40') } 2015-04-01T16:21:33.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf41') } 2015-04-01T16:21:33.148+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.149+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf42') } 2015-04-01T16:21:33.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf43') } 2015-04-01T16:21:33.150+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.155+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.156+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf40') } 2015-04-01T16:21:33.156+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.159+0000 D REPL [rsBackgroundSync] bgsync buffer has 572 bytes 2015-04-01T16:21:33.159+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.159+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf41') } 2015-04-01T16:21:33.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf42') } 2015-04-01T16:21:33.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf43') } 2015-04-01T16:21:33.160+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.160+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.161+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.162+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.162+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.162+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.162+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.164+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf44') } 2015-04-01T16:21:33.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf45') } 2015-04-01T16:21:33.165+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.165+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.165+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf46') } 2015-04-01T16:21:33.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf47') } 2015-04-01T16:21:33.166+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.171+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.171+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.171+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.172+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.172+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.177+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.177+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.178+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.178+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.179+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.180+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.181+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.181+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.181+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.181+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.181+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.182+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:33.182+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.182+0000 I INDEX [repl 
writer worker 15] build index done. scanned 4 total records. 0 secs 2015-04-01T16:21:33.182+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.182+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.183+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.184+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.185+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "x_1" } 2015-04-01T16:21:33.186+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.186+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.186+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:33.186+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:33.186+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: 
true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.187+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:33.187+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:35.187Z 2015-04-01T16:21:33.189+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.190+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:33.190+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf44') } 2015-04-01T16:21:33.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf45') } 2015-04-01T16:21:33.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf46') } 2015-04-01T16:21:33.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf47') } 2015-04-01T16:21:33.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf48') } 2015-04-01T16:21:33.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.194+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.195+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf49') } 2015-04-01T16:21:33.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4a') } 2015-04-01T16:21:33.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4b') } 2015-04-01T16:21:33.196+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.199+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.199+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.200+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:33.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf48') } 2015-04-01T16:21:33.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf49') } 2015-04-01T16:21:33.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4a') } 2015-04-01T16:21:33.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4b') } 2015-04-01T16:21:33.201+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.202+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.203+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4c') } 2015-04-01T16:21:33.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4d') } 2015-04-01T16:21:33.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4e') } 2015-04-01T16:21:33.204+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.205+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.206+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4f') } 2015-04-01T16:21:33.207+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.216+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.216+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4c') } 2015-04-01T16:21:33.216+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4d') } 2015-04-01T16:21:33.217+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.219+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:33.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4e') } 2015-04-01T16:21:33.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf4f') } 2015-04-01T16:21:33.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:33.220+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:33.220+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.233+0000 D QUERY [rsSync] Only one plan is 
available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.233+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:33.234+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|71, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.236+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.236+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.237+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:33.237+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.242+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.242+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:33.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:33.243+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|74, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.250+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.250+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf50') } 2015-04-01T16:21:33.251+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.254+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.254+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.255+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf50') } 2015-04-01T16:21:33.255+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|76, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.261+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.261+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf50') } 2015-04-01T16:21:33.261+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.264+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.264+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0db5355f778169d063') } 2015-04-01T16:21:33.265+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.274+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.274+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0db5355f778169d063') } 2015-04-01T16:21:33.275+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|79, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.278+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.278+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf51') } 2015-04-01T16:21:33.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf52') } 2015-04-01T16:21:33.279+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|81, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.284+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.285+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf51') } 2015-04-01T16:21:33.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.291+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.291+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf52') } 2015-04-01T16:21:33.292+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|83, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.296+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.297+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf53') } 2015-04-01T16:21:33.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf54') } 2015-04-01T16:21:33.298+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.299+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.299+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf53') } 2015-04-01T16:21:33.301+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.311+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.312+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.312+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.312+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.312+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.312+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.312+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.312+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.312+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|87, memberId: 1, 
cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.322+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.323+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.323+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.323+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.324+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.324+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.324+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.324+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.324+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.324+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|88, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.324+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.326+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.326+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf55') } 2015-04-01T16:21:33.326+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|89, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.327+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.327+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.327+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf56') } 2015-04-01T16:21:33.328+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf57') } 2015-04-01T16:21:33.329+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.330+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.330+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.331+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.331+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.331+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.331+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.331+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.331+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.331+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.331+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 2015-04-01T16:21:33.331+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.331+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.332+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|92, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.345+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.346+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.346+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.346+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.346+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.346+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.346+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.346+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.346+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.346+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.347+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.349+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.349+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.349+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.350+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.351+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|94, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.351+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.353+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf58') } 2015-04-01T16:21:33.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf59') } 2015-04-01T16:21:33.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5a') } 2015-04-01T16:21:33.354+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.355+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.356+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.357+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.357+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.357+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.357+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.357+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.357+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.357+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.357+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 2015-04-01T16:21:33.357+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.357+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.358+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|98, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.362+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.362+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf58') } 2015-04-01T16:21:33.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf59') } 2015-04-01T16:21:33.363+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.366+0000 D REPL [rsBackgroundSync] bgsync buffer has 111 bytes 2015-04-01T16:21:33.366+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.366+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5a') } 2015-04-01T16:21:33.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5b') } 2015-04-01T16:21:33.367+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.369+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.369+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5b') } 2015-04-01T16:21:33.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5c') } 2015-04-01T16:21:33.370+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|104, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.380+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.380+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5c') } 2015-04-01T16:21:33.381+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|105, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.384+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.384+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5d') } 2015-04-01T16:21:33.385+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.390+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.390+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5d') } 2015-04-01T16:21:33.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5e') } 2015-04-01T16:21:33.391+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.395+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:33.396+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.397+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5e') } 2015-04-01T16:21:33.397+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.398+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:21:33.398+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:21:33.400+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.400+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5f') } 2015-04-01T16:21:33.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf60') } 2015-04-01T16:21:33.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|111, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.405+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost 2015-04-01T16:21:33.407+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.407+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.415+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf5f') } 2015-04-01T16:21:33.415+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf60') } 2015-04-01T16:21:33.415+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|113, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.416+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.416+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.417+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf61') } 2015-04-01T16:21:33.417+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.428+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.428+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.428+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf61') } 2015-04-01T16:21:33.428+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|115, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.434+0000 D REPL [rsBackgroundSync] bgsync buffer has 114 bytes 2015-04-01T16:21:33.434+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.434+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.434+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf62') } 2015-04-01T16:21:33.434+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf63') } 2015-04-01T16:21:33.435+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.442+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:33.442+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:35.442Z 2015-04-01T16:21:33.443+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.443+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.443+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf62') } 2015-04-01T16:21:33.443+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf63') } 2015-04-01T16:21:33.444+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|119, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.445+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.445+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.445+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf64') } 2015-04-01T16:21:33.446+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.453+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.454+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.466+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf64') } 2015-04-01T16:21:33.466+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.467+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.467+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.468+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf65') } 2015-04-01T16:21:33.468+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf65') } 2015-04-01T16:21:33.468+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.479+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.479+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.480+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf66') } 2015-04-01T16:21:33.480+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|124, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.481+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.481+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.481+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.481+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.482+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.482+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.482+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.482+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.482+0000 D STORAGE [repl writer worker 15] dropIndexes done 
2015-04-01T16:21:33.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|125, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.485+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.486+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.486+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.486+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.486+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.486+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.486+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.486+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.487+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.487+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, 
slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.487+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.487+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.487+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf67') } 2015-04-01T16:21:33.487+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf68') } 2015-04-01T16:21:33.488+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|128, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.488+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.489+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf69') } 2015-04-01T16:21:33.489+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|129, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.491+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.491+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.491+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.491+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.492+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.492+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.492+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.492+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.492+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.492+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 2015-04-01T16:21:33.492+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.492+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.493+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.500+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.501+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.501+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.501+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.501+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.501+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.501+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.501+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.502+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.502+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.502+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|131, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.504+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.504+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.504+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.504+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.505+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.505+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.506+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.507+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.508+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6a') } 2015-04-01T16:21:33.508+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6b') } 2015-04-01T16:21:33.508+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|134, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.508+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.509+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6c') } 2015-04-01T16:21:33.510+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:33.510+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.510+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|135, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.511+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.512+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.512+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:33.512+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.512+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:33.512+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.512+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d
2015-04-01T16:21:33.512+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:33.512+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs
2015-04-01T16:21:33.512+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.512+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.512+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|136, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.518+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.519+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.519+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.519+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.519+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.519+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.519+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.520+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.520+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.520+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.520+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|137, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.521+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.522+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.522+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.522+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.522+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.522+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.523+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.523+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:33.523+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.523+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|138, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.524+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:33.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6d') }
2015-04-01T16:21:33.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6e') }
2015-04-01T16:21:33.526+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.526+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.527+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf6f') }
2015-04-01T16:21:33.528+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|141, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.528+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.530+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.531+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.531+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.531+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.531+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:33.531+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.531+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d
2015-04-01T16:21:33.531+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:33.531+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs
2015-04-01T16:21:33.532+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.532+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.532+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.533+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.533+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.533+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.533+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.533+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.534+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.534+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.534+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.534+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.534+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.534+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.540+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.541+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.541+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.541+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.542+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|144, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.544+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:33.545+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.546+0000 D REPL [rsSync] replication batch size is 10
2015-04-01T16:21:33.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 0 }
2015-04-01T16:21:33.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:33.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 }
2015-04-01T16:21:33.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 8 }
2015-04-01T16:21:33.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 9 }
2015-04-01T16:21:33.548+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|154, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.552+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.552+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.553+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.553+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.553+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.553+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.553+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.553+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.553+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|155, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.556+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.557+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.558+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.558+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.559+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|156, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.559+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.560+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf70') }
2015-04-01T16:21:33.560+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|157, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.568+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.568+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.568+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.568+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.568+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.568+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.568+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.568+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.569+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|158, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.571+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.571+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.571+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.571+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.571+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.571+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.571+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.572+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.572+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.572+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.572+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|159, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.572+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:33.573+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:33.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf71') }
2015-04-01T16:21:33.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf72') }
2015-04-01T16:21:33.573+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.574+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|161, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.574+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf73') }
2015-04-01T16:21:33.575+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|162, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.579+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.579+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.580+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.580+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:33.580+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 }
2015-04-01T16:21:33.580+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:33.580+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.580+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_geoHaystack_Type_1
2015-04-01T16:21:33.580+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:33.580+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs
2015-04-01T16:21:33.580+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.580+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.580+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|163, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.609+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.611+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.611+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.611+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.611+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.611+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.611+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.612+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 }
2015-04-01T16:21:33.612+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.612+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.613+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|164, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.613+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.615+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.616+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:33.616+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.617+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.618+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|165, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.618+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:33.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf74') }
2015-04-01T16:21:33.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf75') }
2015-04-01T16:21:33.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf76') }
2015-04-01T16:21:33.620+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|168, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.620+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.620+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.621+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.621+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.621+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 }
2015-04-01T16:21:33.621+0000 I INDEX [repl writer worker 15] building index using bulk method
2015-04-01T16:21:33.621+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.621+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_geoHaystack_Type_1
2015-04-01T16:21:33.621+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit
2015-04-01T16:21:33.621+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs
2015-04-01T16:21:33.621+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.621+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.622+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|169, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.631+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.632+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.632+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:33.633+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:33.633+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:33.633+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:33.633+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.633+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 }
2015-04-01T16:21:33.633+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.633+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:33.633+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|170, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.636+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.637+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:33.637+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:33.637+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:33.637+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:33.637+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.638+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:33.638+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:33.638+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:33.638+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|171, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.638+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:33.640+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:33.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf77') }
2015-04-01T16:21:33.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf78') }
2015-04-01T16:21:33.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf79') }
2015-04-01T16:21:33.641+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:33.642+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.642+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.642+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.643+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.643+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 } 2015-04-01T16:21:33.643+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.643+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.643+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_geoHaystack_Type_1 2015-04-01T16:21:33.643+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.643+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 
0 secs 2015-04-01T16:21:33.643+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.643+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.644+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|175, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.652+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.653+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.654+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.654+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.654+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.654+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.654+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.654+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "geoHaystack", Type: 1 }, name: "Location_geoHaystack_Type_1", ns: "Tests04011621.testcollection", bucketSize: 1.0 } 2015-04-01T16:21:33.654+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 
2015-04-01T16:21:33.654+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.655+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|176, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.655+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.656+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.657+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.658+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.658+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.658+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.658+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.658+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.658+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.659+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.660+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|177, memberId: 1, 
cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.660+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.661+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:33.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7a') } 2015-04-01T16:21:33.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7b') } 2015-04-01T16:21:33.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7c') } 2015-04-01T16:21:33.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7d') } 2015-04-01T16:21:33.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7e') } 2015-04-01T16:21:33.662+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|182, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.664+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.664+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.664+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.664+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.664+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.664+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.665+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.665+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.665+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.665+0000 I INDEX [repl writer worker 15] build index done. scanned 5 total records. 0 secs 2015-04-01T16:21:33.665+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.665+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.665+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|183, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.691+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.691+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.692+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.692+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.692+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.692+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.692+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.692+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.692+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.692+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.693+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|184, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.694+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.694+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.694+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.694+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.695+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.695+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.695+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.695+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.695+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.695+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|185, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.695+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.696+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.696+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf7f') } 2015-04-01T16:21:33.696+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf80') } 2015-04-01T16:21:33.696+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|187, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.700+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.701+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.701+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf81') } 2015-04-01T16:21:33.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf82') } 2015-04-01T16:21:33.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf83') } 2015-04-01T16:21:33.702+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|190, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.703+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.704+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.704+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.704+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.704+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.704+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.704+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.704+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.704+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.704+0000 I INDEX [repl writer worker 15] build index done. scanned 5 total records. 
0 secs 2015-04-01T16:21:33.704+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.704+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.705+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.714+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.714+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.715+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.716+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.716+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.716+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.716+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.716+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.716+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.716+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - 
collection info cache reset 2015-04-01T16:21:33.716+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.716+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|192, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.718+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.718+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.719+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.720+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.720+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|193, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", 
arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.721+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.722+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf84') } 2015-04-01T16:21:33.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf85') } 2015-04-01T16:21:33.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf86') } 2015-04-01T16:21:33.725+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|196, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.725+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.726+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.726+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.726+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.726+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.726+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.726+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.726+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.726+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.726+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 2015-04-01T16:21:33.727+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.727+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.727+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|197, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.735+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.736+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.737+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.737+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.737+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.737+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.737+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.737+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.737+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.737+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.737+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|198, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.739+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.739+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.740+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.740+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.741+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.741+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.741+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.741+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.741+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.741+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.742+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.742+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.743+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf87') } 2015-04-01T16:21:33.743+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf88') } 2015-04-01T16:21:33.743+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf89') } 2015-04-01T16:21:33.744+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|202, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.744+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.745+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.747+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.747+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.747+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.747+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.747+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.747+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2d 2015-04-01T16:21:33.748+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.748+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 2015-04-01T16:21:33.748+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.748+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.748+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.754+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.755+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.756+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.756+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.756+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.756+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.756+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.756+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2d" }, name: "Location_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.756+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.756+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.756+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|204, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.763+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.763+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.763+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.764+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.765+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|205, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.765+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.766+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:33.767+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8a') } 2015-04-01T16:21:33.767+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|206, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.767+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.768+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8b') } 2015-04-01T16:21:33.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8c') } 2015-04-01T16:21:33.768+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|208, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.769+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.770+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.770+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.771+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.771+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2dsphere" }, name: "Location_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:33.771+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.771+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.771+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2dsphere 2015-04-01T16:21:33.771+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.771+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 
2015-04-01T16:21:33.771+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.771+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|209, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.791+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.792+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.793+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.793+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.793+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.793+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.793+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.793+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2dsphere" }, name: "Location_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:33.793+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 
2015-04-01T16:21:33.794+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.795+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.795+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.796+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.796+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.797+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.797+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 
2015-04-01T16:21:33.798+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.798+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8d') } 2015-04-01T16:21:33.798+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.799+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.799+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.799+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.799+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.800+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.800+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.800+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.800+0000 D INDEX [repl writer worker 15] bulk commit starting for index: loc_2d 2015-04-01T16:21:33.800+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.800+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs 2015-04-01T16:21:33.800+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.800+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.800+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|213, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.811+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.812+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.812+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:33.812+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:33.813+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:33.813+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.813+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.813+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { loc: "2d" }, name: "loc_2d", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:33.813+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.813+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:33.813+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.815+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.816+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.817+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:33.817+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.818+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|215, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.818+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.819+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.821+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8e') } 2015-04-01T16:21:33.821+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8f') } 2015-04-01T16:21:33.821+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf90') } 2015-04-01T16:21:33.821+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.822+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.823+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.823+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.823+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:33.823+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { Location: "2dsphere" }, name: "Location_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:33.823+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:33.823+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.823+0000 D INDEX [repl writer worker 15] bulk commit starting for index: Location_2dsphere 2015-04-01T16:21:33.823+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:33.823+0000 I INDEX [repl writer worker 15] build index done. scanned 3 total records. 0 secs 
2015-04-01T16:21:33.824+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.824+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.824+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|219, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.827+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.828+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.828+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:33.828+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:33.828+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { Location: "2dsphere" }, name: "Location_2dsphere", ns: "Tests04011621.testcollection", 2dsphereIndexVersion: 2 } 2015-04-01T16:21:33.828+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:33.828+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|220, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 
2015-04-01T16:21:33.833+0000 D REPL [rsBackgroundSync] bgsync buffer has 111 bytes 2015-04-01T16:21:33.833+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.834+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:33.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8e') } 2015-04-01T16:21:33.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf8f') } 2015-04-01T16:21:33.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf90') } 2015-04-01T16:21:33.834+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|223, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.875+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.877+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:33.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf91') } 2015-04-01T16:21:33.885+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:33.886+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.1, filling with zeroes... 
2015-04-01T16:21:33.901+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.1, size: 32MB, took 0.01 secs 2015-04-01T16:21:33.902+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:16777216 fromFreeList: 0 eloc: 1:2000 2015-04-01T16:21:33.973+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:33.974+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|224, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:33.978+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:33.994+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf92') } 2015-04-01T16:21:34.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf93') } 2015-04-01T16:21:34.112+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:34.112+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905293000|226, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:34.115+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:34.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf94') } 2015-04-01T16:21:34.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0de15b5605d452bf95') } 2015-04-01T16:21:34.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf96') } 2015-04-01T16:21:34.331+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:34.333+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:34.333+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905294000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:34.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf97') } 2015-04-01T16:21:34.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf98') } 2015-04-01T16:21:34.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf99') } 2015-04-01T16:21:34.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9a') } 2015-04-01T16:21:34.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9b') } 2015-04-01T16:21:34.435+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:34.435+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:34.436+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:34.436+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:34.463+0000 D REPL [rsBackgroundSync] bgsync buffer has 2000236 bytes 2015-04-01T16:21:34.694+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:34.695+0000 D REPL [rsSync] replication batch size is 8 2015-04-01T16:21:34.696+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905294000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:34.698+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9c') } 2015-04-01T16:21:34.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9d') } 2015-04-01T16:21:34.727+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9e') } 2015-04-01T16:21:34.738+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bf9f') } 2015-04-01T16:21:34.744+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa0') } 2015-04-01T16:21:34.746+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:34.765+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.2, filling with zeroes... 
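The replSetUpdatePosition payloads repeated through this log each carry an `optime: Timestamp <n>|<inc>` pair; judging purely from the values in this capture (1427905293000 lines up with the 16:21:33 wall-clock stamps), the first field is epoch seconds scaled by 1000 and the second is an increment that orders operations within the same second. A throwaway parser for that fragment — the field interpretation is inferred from this log, not taken from a specification:

```python
import re
from datetime import datetime, timezone

# Matches the 'Timestamp 1427905293000|209' fragments seen in the log above.
OPTIME_RE = re.compile(r"Timestamp (\d+)\|(\d+)")

def parse_optime(text: str):
    """Return (UTC time, increment) for an optime fragment.

    Treating the first field as seconds*1000 is an inference from the
    values in this particular log.
    """
    m = OPTIME_RE.search(text)
    if m is None:
        raise ValueError("no Timestamp found in: " + text)
    millis, inc = int(m.group(1)), int(m.group(2))
    return datetime.fromtimestamp(millis / 1000, tz=timezone.utc), inc

when, inc = parse_optime("optime: Timestamp 1427905293000|209")
print(when.isoformat(), inc)  # 2015-04-01T16:21:33+00:00 209
```

Running it over successive SyncSourceFeedback entries shows the increment climbing (204, 205, 206, ...) while the second stays at 16:21:33, which is consistent with this reading.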
2015-04-01T16:21:34.768+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.2, size: 64MB, took 0.003 secs 2015-04-01T16:21:34.771+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:22650880 fromFreeList: 0 eloc: 2:2000 2015-04-01T16:21:34.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa1') } 2015-04-01T16:21:34.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa2') } 2015-04-01T16:21:34.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa3') } 2015-04-01T16:21:35.023+0000 D NETWORK [conn15] SocketException: remote: 127.0.0.1:62977 error: 9001 socket exception [CLOSED] server [127.0.0.1:62977] 2015-04-01T16:21:35.023+0000 I NETWORK [conn15] end connection 127.0.0.1:62977 (3 connections now open) 2015-04-01T16:21:35.024+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:62995 #19 (4 connections now open) 2015-04-01T16:21:35.117+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:35.117+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:35.118+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:35.131+0000 D REPL [rsBackgroundSync] bgsync buffer has 9001062 bytes 2015-04-01T16:21:35.188+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:35.220+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 
2015-04-01T16:21:35.221+0000 D NETWORK [ReplExecNetThread-2] connected to server localhost:27019 (127.0.0.1) 2015-04-01T16:21:35.261+0000 D QUERY [conn19] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:35.261+0000 D COMMAND [conn19] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D786C335A7336654650366A41554139423942677A2F64474B523356574C476550) } 2015-04-01T16:21:35.261+0000 I COMMAND [conn19] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D786C335A7336654650366A41554139423942677A2F64474B523356574C476550) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms 2015-04-01T16:21:35.265+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:35.287+0000 D COMMAND [conn19] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D786C335A7336654650366A41554139423942677A2F64474B523356574C4765503973464446716B537A4A664B58326979676F4C7778675663715945544F...), conversationId: 1 } 2015-04-01T16:21:35.287+0000 I COMMAND [conn19] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D786C335A7336654650366A41554139423942677A2F64474B523356574C4765503973464446716B537A4A664B58326979676F4C7778675663715945544F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:21:35.287+0000 D COMMAND [conn19] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } 2015-04-01T16:21:35.287+0000 I ACCESS [conn19] Successfully authenticated as principal __system on local 2015-04-01T16:21:35.288+0000 I COMMAND [conn19] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:21:35.288+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:35.288+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:35.287+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905294000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:35.288+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 
2015-04-01T16:21:35.289+0000 D REPL [rsSync] replication batch size is 13 2015-04-01T16:21:35.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa4') } 2015-04-01T16:21:35.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa5') } 2015-04-01T16:21:35.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa6') } 2015-04-01T16:21:35.318+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa7') } 2015-04-01T16:21:35.323+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa8') } 2015-04-01T16:21:35.328+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfa9') } 2015-04-01T16:21:35.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0ee15b5605d452bfaa') } 2015-04-01T16:21:35.335+0000 W NETWORK [ReplExecNetThread-2] The server certificate does not match the host name localhost 2015-04-01T16:21:35.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfab') } 2015-04-01T16:21:35.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfac') } 2015-04-01T16:21:35.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfad') } 2015-04-01T16:21:35.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfae') } 2015-04-01T16:21:35.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfaf') } 2015-04-01T16:21:35.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb0') } 
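The `Using idhack` entries above each name an ObjectId, and the first four bytes of an ObjectId encode its creation time as big-endian seconds since the Unix epoch; decoding one of the ids from this batch reproduces the surrounding wall-clock stamps. A minimal sketch in plain Python (no driver needed; the sample id is copied from the log):

```python
from datetime import datetime, timezone

def objectid_generation_time(oid_hex: str) -> datetime:
    """The first 4 bytes (8 hex chars) of an ObjectId are big-endian
    seconds since the Unix epoch."""
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

# ObjectId taken from an idhack entry in the log above
ts = objectid_generation_time("551c1b0ee15b5605d452bfa4")
print(ts.isoformat())  # 2015-04-01T16:21:34+00:00
```

The decoded time (16:21:34) sits just before the 16:21:35 replication entries that apply the document on this secondary, which is what you would expect for an id generated on the primary moments earlier.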
2015-04-01T16:21:35.455+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:35.459+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:35.460+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:37.460Z
2015-04-01T16:21:35.574+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:35.574+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:37.574Z
2015-04-01T16:21:35.971+0000 D REPL [rsBackgroundSync] bgsync buffer has 11001298 bytes
2015-04-01T16:21:36.001+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:36.003+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905295000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:36.015+0000 D REPL [rsSync] replication batch size is 13
2015-04-01T16:21:36.016+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb1') }
2015-04-01T16:21:36.028+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb2') }
2015-04-01T16:21:36.036+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb3') }
2015-04-01T16:21:36.043+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb4') }
2015-04-01T16:21:36.050+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb5') }
2015-04-01T16:21:36.052+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:36.053+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:30580736 fromFreeList: 0 eloc: 2:159c000
2015-04-01T16:21:36.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb6') }
2015-04-01T16:21:36.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb7') }
2015-04-01T16:21:36.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb8') }
2015-04-01T16:21:36.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfb9') }
2015-04-01T16:21:36.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfba') }
2015-04-01T16:21:36.086+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfbb') }
2015-04-01T16:21:36.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfbc') }
2015-04-01T16:21:36.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b0fe15b5605d452bfbd') }
2015-04-01T16:21:36.918+0000 D REPL [rsBackgroundSync] bgsync buffer has 3001450 bytes
2015-04-01T16:21:36.996+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.001+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905295000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.003+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:37.005+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfbe') }
2015-04-01T16:21:37.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfbf') }
2015-04-01T16:21:37.019+0000 D REPL [rsBackgroundSync] bgsync buffer has 2783 bytes
2015-04-01T16:21:37.020+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc0') }
2015-04-01T16:21:37.070+0000 D REPL [rsBackgroundSync] bgsync buffer has 4445 bytes
2015-04-01T16:21:37.119+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:37.119+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:37.120+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:37.127+0000 D REPL [rsBackgroundSync] bgsync buffer has 6331 bytes
2015-04-01T16:21:37.127+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.127+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.128+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.129+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.129+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.129+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.129+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.129+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.129+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.132+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.132+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.133+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.133+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.134+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.134+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.134+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.135+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { collMod: "testcollection", usePowerOf2Sizes: true }
2015-04-01T16:21:37.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.135+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.136+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.136+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.136+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.136+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.136+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.136+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.136+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.136+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.137+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.137+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.137+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.138+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.138+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.138+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.139+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc1') }
2015-04-01T16:21:37.139+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.139+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.140+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.140+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.140+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.140+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.140+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.140+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.140+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.141+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.141+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.141+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.141+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.141+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.142+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.142+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.142+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.142+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.142+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.142+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.143+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc2') }
2015-04-01T16:21:37.143+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.144+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.144+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.144+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.144+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.144+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.144+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.144+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.145+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.145+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.145+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.145+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.145+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.145+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.145+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.145+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.146+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.146+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.146+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.146+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.147+0000 D REPL [rsSync] replication batch size is 6
2015-04-01T16:21:37.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc3') }
2015-04-01T16:21:37.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc4') }
2015-04-01T16:21:37.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc5') }
2015-04-01T16:21:37.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc6') }
2015-04-01T16:21:37.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc7') }
2015-04-01T16:21:37.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc8') }
2015-04-01T16:21:37.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.148+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.149+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.149+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.149+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.149+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.149+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.149+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.149+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.149+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.150+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.150+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.150+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.150+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.150+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.150+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.150+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.150+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.151+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.151+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.151+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.152+0000 D REPL [rsSync] replication batch size is 6
2015-04-01T16:21:37.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfc9') }
2015-04-01T16:21:37.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfca') }
2015-04-01T16:21:37.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfcb') }
2015-04-01T16:21:37.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfcc') }
2015-04-01T16:21:37.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfcd') }
2015-04-01T16:21:37.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b10e15b5605d452bfce') }
2015-04-01T16:21:37.153+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905296000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.153+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.153+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.154+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.154+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.154+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.154+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.154+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.154+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.154+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.154+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.155+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.155+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.155+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.155+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.156+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.156+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.156+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.156+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.156+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.156+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.157+0000 D REPL [rsSync] replication batch size is 6
2015-04-01T16:21:37.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfcf') }
2015-04-01T16:21:37.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd0') }
2015-04-01T16:21:37.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd1') }
2015-04-01T16:21:37.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd2') }
2015-04-01T16:21:37.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd3') }
2015-04-01T16:21:37.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd4') }
2015-04-01T16:21:37.158+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.159+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.159+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.159+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.159+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.159+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.159+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.159+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.159+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.159+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.160+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.160+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.160+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.161+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.161+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.161+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.162+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd5') }
2015-04-01T16:21:37.162+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.162+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.163+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.163+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:37.163+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:37.163+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:37.163+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:37.163+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.164+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:37.164+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.164+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:37.165+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:37.165+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:37.165+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:37.166+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:37.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.167+0000 D REPL [rsSync] replication batch size is 6 2015-04-01T16:21:37.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd6') } 2015-04-01T16:21:37.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd7') } 2015-04-01T16:21:37.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd8') } 2015-04-01T16:21:37.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfd9') } 2015-04-01T16:21:37.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfda') } 2015-04-01T16:21:37.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfdb') } 2015-04-01T16:21:37.168+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.168+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.171+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.171+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:37.171+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:37.171+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:37.171+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.171+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.172+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:37.172+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.172+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.173+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.173+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:37.173+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:37.173+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:37.173+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.174+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.174+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:37.174+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.174+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.174+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.175+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.175+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.175+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:37.176+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: "hashed" }, name: "x_hashed", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.176+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:37.176+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.176+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_hashed 2015-04-01T16:21:37.176+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:37.176+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs 2015-04-01T16:21:37.176+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.176+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.176+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.177+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.177+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.177+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:37.177+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:37.178+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: "hashed" }, name: "x_hashed", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.178+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.178+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.178+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.179+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.179+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.179+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:37.179+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.179+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:37.180+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.180+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:37.180+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:37.180+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs 2015-04-01T16:21:37.180+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.180+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.180+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.181+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.181+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.182+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 2:32c6000 2015-04-01T16:21:37.182+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.182+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:37.182+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.182+0000 D INDEX [repl writer worker 15] bulk commit starting for index: y_1 2015-04-01T16:21:37.182+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:37.182+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs 2015-04-01T16:21:37.182+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.182+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.183+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.183+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.183+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.183+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:37.183+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:37.183+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:37.184+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.184+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.184+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.184+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.184+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { y: 1 }, name: "y_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.184+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.184+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:37.184+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.185+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.185+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.185+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:37.185+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:37.185+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:37.185+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.185+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.186+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000 2015-04-01T16:21:37.186+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.186+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.186+0000 D QUERY [rsSync] Only one plan is available; it will be 
run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.186+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.187+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:37.187+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:37.187+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.187+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:37.187+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.187+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:37.187+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:37.187+0000 I INDEX [repl writer worker 15] build index done. scanned 0 total records. 0 secs 2015-04-01T16:21:37.187+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.187+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.188+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.188+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.189+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:37.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfdc') } 2015-04-01T16:21:37.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfe4') } 2015-04-01T16:21:37.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b11e15b5605d452bfe6') } 2015-04-01T16:21:37.190+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.190+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:37.191+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:37.191+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:37.191+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:37.191+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:37.191+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.191+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.191+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, unique: true, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:37.191+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:37.192+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:37.192+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:37.291+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:37.291+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:37.291+0000 I COMMAND [conn19] command 
admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:37.460+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:37.460+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:37.461+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:39.460Z 2015-04-01T16:21:37.574+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:37.574+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:37.575+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:39.575Z 2015-04-01T16:21:38.392+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:38.393+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:38.393+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:38.393+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:38.393+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:38.393+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:38.394+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:38.394+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:38.394+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:38.394+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:38.394+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:38.395+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:38.425+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:38.451+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:38.451+0000 I STORAGE [FileAllocator] allocating new datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.3, filling with zeroes... 
2015-04-01T16:21:38.457+0000 I STORAGE [FileAllocator] done allocating datafile D:\jenkins\workspace\mongo-csharp-driver-test-windows\label\windows64\mc\replica_set\mo\auth\ms\30-release\mssl\ssl\artifacts\data\db27018\Tests04011621.3, size: 511MB, took 0.005 secs 2015-04-01T16:21:38.458+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:276824064 fromFreeList: 0 eloc: 3:2000 2015-04-01T16:21:38.899+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905297000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:39.141+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:39.141+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:39.142+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:39.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:39.487+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:39.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:39.635+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:39.635+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:39.861+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:39.862+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:21:39.862+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:39.862+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 226ms 2015-04-01T16:21:39.863+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:39.863+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:41.863Z 2015-04-01T16:21:39.867+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:39.867+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:41.867Z 2015-04-01T16:21:40.303+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:40.341+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905298000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:40.342+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:40.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:40.567+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:40.574+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 6ms 2015-04-01T16:21:41.378+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:41.378+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:41.378+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:41.385+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:41.385+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:41.719+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:41.727+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905299000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:41.728+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:41.728+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:41.728+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:41.728+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:41.728+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:41.728+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:41.728+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:41.728+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905299000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:41.729+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:41.729+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:41.729+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:41.729+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:41.729+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905300000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:41.730+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:41.730+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:41.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:41.791+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:41.792+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:276824064 fromFreeList: 1 eloc: 3:2000 2015-04-01T16:21:41.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:41.912+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:41.912+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:41.913+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:41.913+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 1ms 2015-04-01T16:21:41.913+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:41.913+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:41.938+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:41.938+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:43.938Z 2015-04-01T16:21:41.939+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:41.939+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:43.939Z 
2015-04-01T16:21:42.991+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:43.013+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905300000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:43.044+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:43.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:43.231+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905302000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:43.671+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:43.692+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:43.732+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:44.074+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905303000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:44.117+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:44.117+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:44.121+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:44.122+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:44.122+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 4ms 2015-04-01T16:21:44.123+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:44.123+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:44.123+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:21:44.124+0000 D REPL [ReplExecNetThread-0] Network status of sending 
replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:44.124+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:46.124Z 2015-04-01T16:21:44.124+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 1ms 2015-04-01T16:21:44.125+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:44.125+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:46.125Z 2015-04-01T16:21:44.511+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:44.513+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:44.516+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:44.517+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:44.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:44.667+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:44.667+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:44.741+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905303000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 
2015-04-01T16:21:44.742+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:44.744+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:44.744+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:44.744+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:44.744+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:44.744+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:44.744+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:44.744+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:44.744+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905304000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:45.088+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:45.088+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:45.089+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:45.089+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:45.089+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:45.089+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:45.090+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:45.090+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:45.090+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:45.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905305000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:45.505+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:45.506+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:45.672+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:45.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:45.724+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:45.725+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:276824064 fromFreeList: 1 eloc: 3:2000 2015-04-01T16:21:45.838+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905305000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:46.201+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:46.205+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:46.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:46.394+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905305000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:46.432+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:46.432+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:46.432+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:46.434+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:46.434+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:46.434+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:46.434+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:48.434Z 2015-04-01T16:21:46.434+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:46.435+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:46.436+0000 
D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:48.436Z 2015-04-01T16:21:46.436+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:46.436+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:46.436+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:47.195+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:47.198+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:47.224+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:47.381+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905306000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:47.776+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:47.777+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:47.777+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:47.778+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:47.778+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:47.778+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:47.778+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:47.778+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:47.778+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905307000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:48.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:48.180+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:48.180+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:48.180+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:48.180+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:48.180+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:48.181+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:48.181+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000 2015-04-01T16:21:48.181+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:48.181+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905308000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:48.426+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:48.428+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:48.501+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:48.501+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:48.512+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:48.543+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:48.543+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:276824064 fromFreeList: 1 eloc: 3:2000 2015-04-01T16:21:48.665+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905308000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:48.709+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:48.709+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:50.709Z 2015-04-01T16:21:48.711+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:48.711+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:48.711+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:48.711+0000 D REPL 
[ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:48.712+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:50.711Z 2015-04-01T16:21:48.712+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:48.712+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:48.712+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:49.354+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:49.356+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:49.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:49.533+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905309000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:50.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:50.135+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:50.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:50.359+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905309000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:50.611+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:21:50.611+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:21:50.612+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:21:50.612+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:21:50.711+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:50.711+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:50.713+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:50.713+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 1ms 2015-04-01T16:21:50.713+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events 
2015-04-01T16:21:50.713+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:50.713+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:50.714+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:52.713Z 2015-04-01T16:21:50.714+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:50.714+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:52.714Z 2015-04-01T16:21:50.714+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:50.714+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:50.715+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:51.748+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:51.798+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:51.824+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:51.985+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905311000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:52.215+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:52.232+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:52.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:52.487+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:52.489+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905311000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:52.489+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:52.489+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:52.489+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:52.489+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:52.489+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:52.489+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:52.489+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:52.490+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905311000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:52.713+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:52.713+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:52.713+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:52.714+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:52.714+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:52.714+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:21:52.714+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:52.714+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:54.714Z
2015-04-01T16:21:52.714+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:52.715+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:54.715Z
2015-04-01T16:21:52.717+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:52.718+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:52.718+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:52.981+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:52.982+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:52.983+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:52.983+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:52.984+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:52.984+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:52.985+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:52.985+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:52.985+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:52.987+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905312000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.017+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.052+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:53.091+0000 D REPL [rsBackgroundSync] bgsync buffer has 2097376 bytes
2015-04-01T16:21:53.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 0 }
2015-04-01T16:21:53.096+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:53.096+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:33554432 fromFreeList: 1 eloc: 2:159c000
2015-04-01T16:21:53.111+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905312000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.144+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:53.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:53.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:53.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:53.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 }
2015-04-01T16:21:53.244+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.274+0000 D REPL [rsSync] replication batch size is 8
2015-04-01T16:21:53.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 }
2015-04-01T16:21:53.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 }
2015-04-01T16:21:53.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 }
2015-04-01T16:21:53.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 8 }
2015-04-01T16:21:53.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 9 }
2015-04-01T16:21:53.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 }
2015-04-01T16:21:53.315+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 11 }
2015-04-01T16:21:53.321+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 12 }
2015-04-01T16:21:53.368+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.368+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.369+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:53.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 13 }
2015-04-01T16:21:53.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 14 }
2015-04-01T16:21:53.381+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:53.381+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:41287680 fromFreeList: 0 eloc: 3:10802000
2015-04-01T16:21:53.393+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.763+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.764+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:53.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 15 }
2015-04-01T16:21:53.829+0000 D REPL [rsBackgroundSync] bgsync buffer has 2097376 bytes
2015-04-01T16:21:53.839+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:53.853+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:53.878+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:53.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 16 }
2015-04-01T16:21:53.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 17 }
2015-04-01T16:21:53.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 18 }
2015-04-01T16:21:53.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 19 }
2015-04-01T16:21:54.126+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:54.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:54.129+0000 D REPL [rsSync] replication batch size is 10
2015-04-01T16:21:54.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 20 }
2015-04-01T16:21:54.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 21 }
2015-04-01T16:21:54.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 22 }
2015-04-01T16:21:54.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 23 }
2015-04-01T16:21:54.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 24 }
2015-04-01T16:21:54.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 25 }
2015-04-01T16:21:54.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 26 }
2015-04-01T16:21:54.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 27 }
2015-04-01T16:21:54.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 28 }
2015-04-01T16:21:54.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 29 }
2015-04-01T16:21:54.377+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905313000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:54.392+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:21:54.393+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:21:54.394+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:21:54.394+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:21:54.550+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:54.596+0000 D REPL [rsBackgroundSync] bgsync buffer has 2097376 bytes
2015-04-01T16:21:54.608+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:54.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 30 }
2015-04-01T16:21:54.649+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:54.650+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905314000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:54.651+0000 D REPL [rsSync] replication batch size is 7
2015-04-01T16:21:54.704+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 31 }
2015-04-01T16:21:54.712+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 32 }
2015-04-01T16:21:54.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 33 }
2015-04-01T16:21:54.731+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:54.734+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:55738368 fromFreeList: 0 eloc: 3:12f62000
2015-04-01T16:21:54.746+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:54.746+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:21:54.748+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 34 }
2015-04-01T16:21:54.749+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:21:54.749+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 2ms
2015-04-01T16:21:54.749+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:21:54.790+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:21:54.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 35 }
2015-04-01T16:21:54.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 36 }
2015-04-01T16:21:54.823+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 37 }
2015-04-01T16:21:54.839+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:56.839Z
2015-04-01T16:21:54.874+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:54.875+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905314000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:54.877+0000 D REPL [rsSync] replication batch size is 7
2015-04-01T16:21:54.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 38 }
2015-04-01T16:21:54.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 39 }
2015-04-01T16:21:54.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 40 }
2015-04-01T16:21:54.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 41 }
2015-04-01T16:21:54.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 42 }
2015-04-01T16:21:54.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 43 }
2015-04-01T16:21:54.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 44 }
2015-04-01T16:21:54.958+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905314000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:54.981+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:21:54.981+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:56.981Z
2015-04-01T16:21:54.981+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:54.981+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:21:54.982+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:21:55.044+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.045+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.047+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 45 }
2015-04-01T16:21:55.059+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.076+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.076+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.076+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:55.076+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:55.076+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:55.076+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:55.076+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.077+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:55.077+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.106+0000 D REPL [rsBackgroundSync] bgsync buffer has 107 bytes
2015-04-01T16:21:55.115+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.116+0000 D REPL [rsBackgroundSync] bgsync buffer has 1710 bytes
2015-04-01T16:21:55.116+0000 D REPL [rsBackgroundSync] bgsync buffer has 3420 bytes
2015-04-01T16:21:55.117+0000 D REPL [rsBackgroundSync] bgsync buffer has 5130 bytes
2015-04-01T16:21:55.117+0000 D REPL [rsBackgroundSync] bgsync buffer has 6840 bytes
2015-04-01T16:21:55.117+0000 D REPL [rsBackgroundSync] bgsync buffer has 8550 bytes
2015-04-01T16:21:55.125+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.125+0000 D REPL [rsBackgroundSync] bgsync buffer has 10260 bytes
2015-04-01T16:21:55.125+0000 D REPL [rsBackgroundSync] bgsync buffer has 11970 bytes
2015-04-01T16:21:55.126+0000 D REPL [rsBackgroundSync] bgsync buffer has 13680 bytes
2015-04-01T16:21:55.126+0000 D REPL [rsBackgroundSync] bgsync buffer has 15390 bytes
2015-04-01T16:21:55.128+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:55.129+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.131+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.132+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.133+0000 D REPL [rsBackgroundSync] bgsync buffer has 1140 bytes
2015-04-01T16:21:55.133+0000 D REPL [rsBackgroundSync] bgsync buffer has 2850 bytes
2015-04-01T16:21:55.133+0000 D REPL [rsBackgroundSync] bgsync buffer has 4560 bytes
2015-04-01T16:21:55.156+0000 D REPL [rsSync] replication batch size is 140
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 6270 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 7980 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 9690 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 11400 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 13110 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 14820 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 16530 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 18240 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 19950 bytes
2015-04-01T16:21:55.157+0000 D REPL [rsBackgroundSync] bgsync buffer has 21660 bytes
2015-04-01T16:21:55.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfe8') }
2015-04-01T16:21:55.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfe9') }
2015-04-01T16:21:55.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfea') }
2015-04-01T16:21:55.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfeb') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfec') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfed') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfee') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfef') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff0') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff1') }
2015-04-01T16:21:55.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff2') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff3') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff4') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff5') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff6') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff7') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff8') }
2015-04-01T16:21:55.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bff9') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bffa') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bffb') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bffc') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bffd') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bffe') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452bfff') }
2015-04-01T16:21:55.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c000') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c001') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c002') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c003') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c004') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c005') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c006') }
2015-04-01T16:21:55.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c007') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c008') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c009') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00a') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00b') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00c') }
2015-04-01T16:21:55.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00d') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00e') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c00f') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c010') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c011') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c012') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c013') }
2015-04-01T16:21:55.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c014') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c015') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c016') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c017') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c018') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c019') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01a') }
2015-04-01T16:21:55.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01b') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01c') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01d') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01e') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c01f') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c020') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c021') }
2015-04-01T16:21:55.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c022') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c023') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c024') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c025') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c026') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c027') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c028') }
2015-04-01T16:21:55.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c029') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02a') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02b') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02c') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02d') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02e') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c02f') }
2015-04-01T16:21:55.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c030') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c031') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c032') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c033') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c034') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c035') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c036') }
2015-04-01T16:21:55.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c037') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c038') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c039') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03a') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03b') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03c') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03d') }
2015-04-01T16:21:55.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03e') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c03f') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c040') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c041') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c042') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c043') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c044') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c045') }
2015-04-01T16:21:55.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c046') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c047') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c048') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c049') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04a') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04b') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04c') }
2015-04-01T16:21:55.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04d') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04e') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c04f') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c050') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c051') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c052') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c053') }
2015-04-01T16:21:55.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c054') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c055') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c056') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c057') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c058') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c059') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05a') }
2015-04-01T16:21:55.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05b') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05c') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05d') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05e') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c05f') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c060') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c061') }
2015-04-01T16:21:55.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c062') }
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c063') }
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c064') }
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c065') }
2015-04-01T16:21:55.176+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.176+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:32768 fromFreeList: 0 eloc: 3:1648a000
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c066') }
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c067') }
2015-04-01T16:21:55.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c068') }
2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c069') }
2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c06a') }
2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c06b') }
2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using
idhack: { _id: ObjectId('551c1b23e15b5605d452c06c') } 2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c06d') } 2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c06e') } 2015-04-01T16:21:55.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c06f') } 2015-04-01T16:21:55.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c070') } 2015-04-01T16:21:55.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c071') } 2015-04-01T16:21:55.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c072') } 2015-04-01T16:21:55.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c073') } 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 23370 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 25080 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 26790 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 28500 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 30210 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 31920 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 33630 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 35340 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 37050 bytes 2015-04-01T16:21:55.180+0000 D REPL [rsBackgroundSync] bgsync buffer has 38760 bytes 2015-04-01T16:21:55.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:55.216+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:55.235+0000 D REPL [rsBackgroundSync] bgsync buffer has 912 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 2622 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 4332 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 6042 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 7752 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 9462 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 11172 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 12882 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 14592 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 16302 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 18012 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 19722 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 21432 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 23142 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 24852 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 26562 bytes 2015-04-01T16:21:55.236+0000 D REPL [rsBackgroundSync] bgsync buffer has 28272 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] 
bgsync buffer has 29982 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 31692 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 33402 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 35112 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 36822 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 38532 bytes 2015-04-01T16:21:55.237+0000 D REPL [rsBackgroundSync] bgsync buffer has 40242 bytes 2015-04-01T16:21:55.259+0000 D REPL [rsSync] replication batch size is 347 2015-04-01T16:21:55.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 41952 bytes 2015-04-01T16:21:55.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 43662 bytes 2015-04-01T16:21:55.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 45372 bytes 2015-04-01T16:21:55.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 47082 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 48792 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 50502 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 52212 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 53922 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 55632 bytes 2015-04-01T16:21:55.261+0000 D REPL [rsBackgroundSync] bgsync buffer has 57342 bytes 2015-04-01T16:21:55.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c074') } 2015-04-01T16:21:55.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c075') } 2015-04-01T16:21:55.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c076') } 2015-04-01T16:21:55.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c077') } 2015-04-01T16:21:55.265+0000 D QUERY [repl writer 
worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c078') } 2015-04-01T16:21:55.265+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c079') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07a') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07b') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07c') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07d') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07e') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c07f') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c080') } 2015-04-01T16:21:55.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c081') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c082') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c083') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c084') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c085') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c086') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c087') } 2015-04-01T16:21:55.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c088') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c089') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08a') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08b') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08c') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08d') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08e') } 2015-04-01T16:21:55.268+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c08f') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c090') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c091') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c092') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c093') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c094') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c095') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c096') } 2015-04-01T16:21:55.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c097') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c098') } 
2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c099') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09a') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09b') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09c') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09d') } 2015-04-01T16:21:55.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09e') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c09f') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a0') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a1') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a2') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a3') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a4') } 2015-04-01T16:21:55.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a5') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a6') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a7') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a8') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0a9') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0aa') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ab') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ac') } 2015-04-01T16:21:55.272+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ad') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ae') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0af') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b0') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b1') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b2') } 2015-04-01T16:21:55.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b3') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b4') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b5') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b6') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b7') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b8') } 2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0b9') } 
2015-04-01T16:21:55.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ba') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0bb') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0bc') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0bd') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0be') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0bf') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c0') } 2015-04-01T16:21:55.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c1') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c2') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c3') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c4') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c5') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c6') } 2015-04-01T16:21:55.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c7') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c8') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0c9') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ca') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0cb') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0cc') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0cd') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ce') } 2015-04-01T16:21:55.277+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0cf') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d0') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d1') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d2') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d3') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d4') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d5') } 2015-04-01T16:21:55.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d6') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d7') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d8') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0d9') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0da') } 
2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0db') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0dc') } 2015-04-01T16:21:55.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0dd') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0de') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0df') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e0') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e1') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e2') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e3') } 2015-04-01T16:21:55.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e4') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e5') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e6') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e7') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e8') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0e9') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ea') } 2015-04-01T16:21:55.281+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0eb') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ec') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ed') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ee') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ef') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f0') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f1') } 2015-04-01T16:21:55.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f2') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f3') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f4') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f5') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f6') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f7') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f8') } 2015-04-01T16:21:55.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0f9') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0fa') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0fb') } 
2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0fc') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0fd') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0fe') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c0ff') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c100') } 2015-04-01T16:21:55.284+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c101') } 2015-04-01T16:21:55.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c102') } 2015-04-01T16:21:55.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c103') } 2015-04-01T16:21:55.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c104') } 2015-04-01T16:21:55.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c105') } 2015-04-01T16:21:55.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c106') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c107') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c108') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c109') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10a') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10b') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10c') } 2015-04-01T16:21:55.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10d') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10e') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c10f') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c110') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c111') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c112') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c113') } 2015-04-01T16:21:55.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c114') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c115') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c116') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c117') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c118') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c119') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11a') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11b') } 2015-04-01T16:21:55.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11c') } 
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11d') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11e') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c11f') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c120') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c121') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c122') }
2015-04-01T16:21:55.289+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c123') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c124') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c125') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c126') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c127') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c128') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c129') }
2015-04-01T16:21:55.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12a') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12b') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12c') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12d') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12e') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c12f') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c130') }
2015-04-01T16:21:55.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c131') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c132') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c133') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c134') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c135') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c136') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c137') }
2015-04-01T16:21:55.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c138') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c139') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13a') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13b') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13c') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13d') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13e') }
2015-04-01T16:21:55.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c13f') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c140') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c141') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c142') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c143') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c144') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c145') }
2015-04-01T16:21:55.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c146') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c147') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c148') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c149') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14a') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14b') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14c') }
2015-04-01T16:21:55.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14d') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14e') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c14f') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c150') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c151') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c152') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c153') }
2015-04-01T16:21:55.296+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c154') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c155') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c156') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c157') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c158') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c159') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15a') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15b') }
2015-04-01T16:21:55.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15c') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15d') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15e') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c15f') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c160') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c161') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c162') }
2015-04-01T16:21:55.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c163') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c164') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c165') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c166') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c167') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c168') }
2015-04-01T16:21:55.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c169') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16a') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16b') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16c') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16d') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16e') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c16f') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c170') }
2015-04-01T16:21:55.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c171') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c172') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c173') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c174') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c175') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c176') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c177') }
2015-04-01T16:21:55.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c178') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c179') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17a') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17b') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17c') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17d') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17e') }
2015-04-01T16:21:55.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c17f') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c180') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c181') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c182') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c183') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c184') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c185') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c186') }
2015-04-01T16:21:55.303+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c187') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c188') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c189') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18a') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18b') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18c') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18d') }
2015-04-01T16:21:55.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18e') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c18f') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c190') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c191') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c192') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c193') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c194') }
2015-04-01T16:21:55.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c195') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c196') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c197') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c198') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c199') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19a') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19b') }
2015-04-01T16:21:55.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19c') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19d') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19e') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c19f') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a0') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a1') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a2') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a3') }
2015-04-01T16:21:55.307+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a4') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a5') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a6') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a7') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a8') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1a9') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1aa') }
2015-04-01T16:21:55.308+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ab') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ac') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ad') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ae') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1af') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b0') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b1') }
2015-04-01T16:21:55.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b2') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b3') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b4') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b5') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b6') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b7') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b8') }
2015-04-01T16:21:55.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1b9') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ba') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1bb') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1bc') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1bd') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1be') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1bf') }
2015-04-01T16:21:55.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c0') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c1') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c2') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c3') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c4') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c5') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c6') }
2015-04-01T16:21:55.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c7') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c8') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1c9') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ca') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1cb') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1cc') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1cd') }
2015-04-01T16:21:55.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ce') }
2015-04-01T16:21:55.319+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.321+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|490, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.331+0000 D REPL [rsSync] replication batch size is 513
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1cf') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d0') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d1') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d2') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d3') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d4') }
2015-04-01T16:21:55.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d5') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d6') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d7') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d8') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1d9') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1da') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1db') }
2015-04-01T16:21:55.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1dc') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1dd') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1de') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1df') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e0') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e1') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e2') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e3') }
2015-04-01T16:21:55.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e4') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e5') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e6') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e7') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e8') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1e9') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ea') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1eb') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ec') }
2015-04-01T16:21:55.335+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ed') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ee') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ef') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f0') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f1') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f2') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f3') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f4') }
2015-04-01T16:21:55.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f5') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f6') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f7') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f8') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1f9') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1fa') }
2015-04-01T16:21:55.337+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1fb') }
2015-04-01T16:21:55.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1fc') }
2015-04-01T16:21:55.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1fd') }
2015-04-01T16:21:55.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1fe') }
2015-04-01T16:21:55.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c1ff') }
2015-04-01T16:21:55.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c200') }
2015-04-01T16:21:55.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c201') }
2015-04-01T16:21:55.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c202') }
2015-04-01T16:21:55.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c203') }
2015-04-01T16:21:55.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c204') }
2015-04-01T16:21:55.340+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c205') }
2015-04-01T16:21:55.340+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c206') }
2015-04-01T16:21:55.340+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c207') }
2015-04-01T16:21:55.340+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c208') }
2015-04-01T16:21:55.340+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c209') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20a') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20b') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20c') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20d') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20e') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c20f') }
2015-04-01T16:21:55.341+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c210') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c211') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c212') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c213') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c214') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c215') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c216') }
2015-04-01T16:21:55.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c217') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c218') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c219') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21a') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21b') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21c') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21d') }
2015-04-01T16:21:55.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21e') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c21f') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c220') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c221') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c222') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c223') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c224') }
2015-04-01T16:21:55.344+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c225') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c226') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c227') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c228') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c229') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22a') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22b') }
2015-04-01T16:21:55.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22c') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22d') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22e') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c22f') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c230') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c231') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c232') }
2015-04-01T16:21:55.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c233') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c234') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c235') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c236') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c237') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c238') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c239') }
2015-04-01T16:21:55.347+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23a') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23b') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23c') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23d') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23e') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c23f') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c240') }
2015-04-01T16:21:55.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c241') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c242') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c243') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c244') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c245') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c246') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c247') }
2015-04-01T16:21:55.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c248') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c249') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24a') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24b') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24c') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24d') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24e') }
2015-04-01T16:21:55.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c24f') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c250') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c251') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c252') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c253') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c254') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c255') }
2015-04-01T16:21:55.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c256') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c257') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c258') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c259') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25a') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25b') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25c') }
2015-04-01T16:21:55.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25d') }
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25e') }
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c25f') }
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c260') }
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c261') }
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c262') }
2015-04-01T16:21:55.353+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.353+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000
2015-04-01T16:21:55.353+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c263') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c264') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c265') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c266') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c267') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c268') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c269') }
2015-04-01T16:21:55.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26a') }
2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26b') }
2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26c') }
2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26d') } 2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26e') } 2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c26f') } 2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c270') } 2015-04-01T16:21:55.355+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c271') } 2015-04-01T16:21:55.356+0000 D REPL [rsBackgroundSync] bgsync buffer has 554 bytes 2015-04-01T16:21:55.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c272') } 2015-04-01T16:21:55.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c273') } 2015-04-01T16:21:55.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c274') } 2015-04-01T16:21:55.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c275') } 2015-04-01T16:21:55.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c276') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c277') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c278') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c279') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c27a') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c27b') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c27c') } 2015-04-01T16:21:55.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c27d') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c27e') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c27f') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c280') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c281') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c282') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c283') } 2015-04-01T16:21:55.358+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c284') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c285') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c286') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c287') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c288') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c289') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28a') } 2015-04-01T16:21:55.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28b') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28c') } 
2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28d') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28e') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c28f') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c290') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c291') } 2015-04-01T16:21:55.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c292') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c293') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c294') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c295') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c296') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c297') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c298') } 2015-04-01T16:21:55.361+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c299') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29a') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29b') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29c') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29d') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29e') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c29f') } 2015-04-01T16:21:55.362+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a0') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a1') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a2') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a3') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a4') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a5') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a6') } 2015-04-01T16:21:55.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a7') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a8') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2a9') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2aa') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ab') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ac') } 2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ad') } 
2015-04-01T16:21:55.364+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ae') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2af') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b0') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b1') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b2') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b3') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b4') } 2015-04-01T16:21:55.365+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b5') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b6') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b7') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b8') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2b9') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ba') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2bb') } 2015-04-01T16:21:55.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2bc') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2bd') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2be') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2bf') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c0') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c1') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c2') } 2015-04-01T16:21:55.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c3') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c4') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c5') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c6') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c7') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c8') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2c9') } 2015-04-01T16:21:55.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ca') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2cb') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2cc') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2cd') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ce') } 
2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2cf') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d0') } 2015-04-01T16:21:55.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d1') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d2') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d3') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d4') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d5') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d6') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d7') } 2015-04-01T16:21:55.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d8') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2d9') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2da') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2db') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2dc') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2dd') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2de') } 2015-04-01T16:21:55.371+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2df') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e0') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e1') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e2') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e3') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e4') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e5') } 2015-04-01T16:21:55.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e6') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e7') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e8') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2e9') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ea') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2eb') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ec') } 2015-04-01T16:21:55.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ed') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ee') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ef') } 
2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f0') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f1') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f2') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f3') } 2015-04-01T16:21:55.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f4') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f5') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f6') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f7') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f8') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2f9') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2fa') } 2015-04-01T16:21:55.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2fb') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2fc') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2fd') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2fe') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c2ff') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c300') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c301') } 2015-04-01T16:21:55.376+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c302') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c303') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c304') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c305') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c306') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c307') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c308') } 2015-04-01T16:21:55.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c309') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30a') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30b') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30c') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30d') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30e') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c30f') } 2015-04-01T16:21:55.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c310') } 
2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c311') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c312') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c313') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c314') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c315') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c316') } 2015-04-01T16:21:55.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c317') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c318') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c319') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31a') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31b') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31c') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31d') } 2015-04-01T16:21:55.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31e') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c31f') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c320') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c321') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c322') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c323') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c324') } 2015-04-01T16:21:55.381+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c325') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c326') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c327') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c328') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c329') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32a') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32b') } 2015-04-01T16:21:55.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32c') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32d') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32e') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c32f') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c330') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c331') } 
2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c332') } 2015-04-01T16:21:55.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c333') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c334') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c335') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c336') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c337') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c338') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c339') } 2015-04-01T16:21:55.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33a') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33b') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33c') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33d') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33e') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c33f') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c340') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c341') } 2015-04-01T16:21:55.385+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c342') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c343') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c344') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c345') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c346') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c347') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c348') } 2015-04-01T16:21:55.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c349') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34a') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34b') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34c') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34d') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34e') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c34f') } 2015-04-01T16:21:55.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c350') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c351') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c352') } 
2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c353') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c354') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c355') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c356') } 2015-04-01T16:21:55.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c357') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c358') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c359') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35a') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35b') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35c') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35d') } 2015-04-01T16:21:55.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35e') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c35f') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c360') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c361') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c362') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c363') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c364') } 2015-04-01T16:21:55.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c365') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c366') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c367') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c368') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c369') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36a') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36b') } 2015-04-01T16:21:55.391+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36c') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36d') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36e') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c36f') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c370') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c371') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c372') } 2015-04-01T16:21:55.392+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c373') } 
2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c374') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c375') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c376') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c377') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c378') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c379') } 2015-04-01T16:21:55.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37a') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37b') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37c') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37d') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37e') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c37f') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c380') } 2015-04-01T16:21:55.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c381') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c382') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c383') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c384') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c385') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c386') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c387') } 2015-04-01T16:21:55.395+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c388') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c389') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38a') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38b') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38c') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38d') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38e') } 2015-04-01T16:21:55.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c38f') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c390') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c391') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c392') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c393') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c394') } 
2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c395') } 2015-04-01T16:21:55.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c396') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c397') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c398') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c399') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39a') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39b') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39c') } 2015-04-01T16:21:55.398+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39d') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39e') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c39f') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a0') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a1') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a2') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a3') } 2015-04-01T16:21:55.399+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a4') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a5') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a6') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a7') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a8') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3a9') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3aa') } 2015-04-01T16:21:55.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ab') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ac') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ad') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ae') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3af') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b0') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b1') } 2015-04-01T16:21:55.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b2') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b3') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b4') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b5') } 
2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b6') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b7') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b8') } 2015-04-01T16:21:55.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3b9') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ba') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3bb') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3bc') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3bd') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3be') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3bf') } 2015-04-01T16:21:55.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c0') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c1') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c2') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c3') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c4') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c5') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c6') } 2015-04-01T16:21:55.404+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c7') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c8') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3c9') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ca') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3cb') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3cc') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3cd') } 2015-04-01T16:21:55.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ce') } 2015-04-01T16:21:55.406+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3cf') } 2015-04-01T16:21:55.406+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:55.407+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh. 
2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 2264 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 3974 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 5684 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 7394 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 9104 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 10814 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 12524 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 14234 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 15944 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 17654 bytes 2015-04-01T16:21:55.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 19364 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 21074 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 22784 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 24494 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 26204 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 27914 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 29624 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 31334 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 33044 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 34754 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 36464 bytes 2015-04-01T16:21:55.410+0000 D REPL [rsBackgroundSync] bgsync buffer has 38174 bytes 2015-04-01T16:21:55.411+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:55.411+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|1003, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:55.412+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:55.413+0000 D REPL [rsBackgroundSync] bgsync buffer has 39779 bytes 2015-04-01T16:21:55.413+0000 D REPL [rsBackgroundSync] bgsync buffer has 41489 bytes 2015-04-01T16:21:55.414+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:55.415+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:55.415+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:55.415+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:55.415+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:55.415+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:55.433+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|1004, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:55.434+0000 D QUERY [rsSync] Only one plan is available; it will be run but will 
not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 43092 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 44802 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 46512 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 48222 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 49932 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 51642 bytes 2015-04-01T16:21:55.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 53352 bytes 2015-04-01T16:21:55.436+0000 D REPL [rsBackgroundSync] bgsync buffer has 55062 bytes 2015-04-01T16:21:55.436+0000 D REPL [rsBackgroundSync] bgsync buffer has 56772 bytes 2015-04-01T16:21:55.436+0000 D REPL [rsBackgroundSync] bgsync buffer has 58482 bytes 2015-04-01T16:21:55.436+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:55.471+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:55.471+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:55.471+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:55.472+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:55.472+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:55.472+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000 2015-04-01T16:21:55.472+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 60192 
bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 61902 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 63612 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 65322 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 67032 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 68742 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 70452 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 72162 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 73872 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 75582 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 77292 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 79002 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 80712 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 82422 bytes 2015-04-01T16:21:55.474+0000 D REPL [rsBackgroundSync] bgsync buffer has 84132 bytes 2015-04-01T16:21:55.475+0000 D REPL [rsBackgroundSync] bgsync buffer has 85842 bytes 2015-04-01T16:21:55.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:55.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|1005, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:55.516+0000 D REPL [rsSync] replication batch size is 763 2015-04-01T16:21:55.516+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d0') } 2015-04-01T16:21:55.516+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d1') } 2015-04-01T16:21:55.516+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d2') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d3') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d4') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d5') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d6') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d7') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d8') } 2015-04-01T16:21:55.517+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3d9') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3da') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c3db') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3dc') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3dd') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3de') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3df') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e0') } 2015-04-01T16:21:55.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e1') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e2') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e3') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e4') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e5') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e6') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e7') } 2015-04-01T16:21:55.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e8') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3e9') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ea') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3eb') } 
2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ec') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ed') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ee') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ef') } 2015-04-01T16:21:55.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f0') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f1') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f2') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f3') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f4') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f5') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f6') } 2015-04-01T16:21:55.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f7') } 2015-04-01T16:21:55.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f8') } 2015-04-01T16:21:55.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3f9') } 2015-04-01T16:21:55.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3fa') } 2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 570 bytes 2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 2280 bytes 
2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 3990 bytes 2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 5700 bytes 2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 7410 bytes 2015-04-01T16:21:55.522+0000 D REPL [rsBackgroundSync] bgsync buffer has 9120 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 10830 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 12540 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 14250 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 15960 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 17670 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 19380 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 21090 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 22800 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 24510 bytes 2015-04-01T16:21:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 26220 bytes 2015-04-01T16:21:55.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3fb') } 2015-04-01T16:21:55.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3fc') } 2015-04-01T16:21:55.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3fd') } 2015-04-01T16:21:55.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3fe') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c3ff') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c400') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c401') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c402') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c403') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c404') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c405') } 2015-04-01T16:21:55.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c406') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c407') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c408') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c409') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40a') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40b') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40c') } 2015-04-01T16:21:55.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40d') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40e') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c40f') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c410') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c411') } 
2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c412') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c413') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c414') } 2015-04-01T16:21:55.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c415') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c416') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c417') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c418') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c419') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41a') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41b') } 2015-04-01T16:21:55.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41c') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41d') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41e') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c41f') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c420') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c421') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c422') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c423') } 2015-04-01T16:21:55.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c424') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c425') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c426') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c427') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c428') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c429') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42a') } 2015-04-01T16:21:55.529+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42b') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42c') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42d') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42e') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c42f') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c430') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c431') } 2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c432') } 
2015-04-01T16:21:55.530+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c433') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c434') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c435') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c436') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c437') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c438') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c439') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43a') } 2015-04-01T16:21:55.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43b') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43c') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43d') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43e') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c43f') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c440') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c441') } 2015-04-01T16:21:55.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c442') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c443') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c444') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c445') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c446') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c447') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c448') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c449') } 2015-04-01T16:21:55.533+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44a') } 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44b') } 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44c') } 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44d') } 2015-04-01T16:21:55.534+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:55.534+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:32768 fromFreeList: 1 eloc: 3:1648a000 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44e') } 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c44f') } 2015-04-01T16:21:55.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c450') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c451') } 
2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c452') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c453') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c454') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c455') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c456') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c457') } 2015-04-01T16:21:55.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c458') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c459') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45a') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45b') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45c') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45d') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45e') } 2015-04-01T16:21:55.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c45f') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c460') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c461') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c462') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c463') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c464') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c465') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c466') } 2015-04-01T16:21:55.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c467') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c468') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c469') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46a') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46b') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46c') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46d') } 2015-04-01T16:21:55.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46e') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c46f') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c470') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c471') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c472') } 
2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c473') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c474') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c475') } 2015-04-01T16:21:55.539+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c476') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c477') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c478') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c479') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47a') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47b') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47c') } 2015-04-01T16:21:55.540+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47d') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47e') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c47f') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c480') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c481') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c482') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c483') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c484') } 2015-04-01T16:21:55.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c485') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c486') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c487') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c488') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c489') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48a') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48b') } 2015-04-01T16:21:55.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48c') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48d') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48e') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c48f') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c490') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c491') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c492') } 2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c493') } 
2015-04-01T16:21:55.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c494') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c495') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c496') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c497') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c498') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c499') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49a') } 2015-04-01T16:21:55.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49b') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49c') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49d') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49e') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c49f') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a0') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a1') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a2') } 2015-04-01T16:21:55.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a3') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a4') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a5') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a6') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a7') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a8') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4a9') } 2015-04-01T16:21:55.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4aa') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ab') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ac') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ad') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ae') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4af') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b0') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b1') } 2015-04-01T16:21:55.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b2') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b3') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b4') } 
2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b5') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b6') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b7') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b8') } 2015-04-01T16:21:55.548+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4b9') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ba') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4bb') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4bc') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4bd') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4be') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4bf') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c0') } 2015-04-01T16:21:55.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c1') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c2') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c3') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c4') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c5') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c6') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c7') } 2015-04-01T16:21:55.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c8') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4c9') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ca') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4cb') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4cc') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4cd') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ce') } 2015-04-01T16:21:55.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4cf') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d0') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d1') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d2') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d3') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d4') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d5') } 
2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d6') } 2015-04-01T16:21:55.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d7') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d8') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4d9') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4da') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4db') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4dc') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4dd') } 2015-04-01T16:21:55.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4de') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4df') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e0') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e1') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e2') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e3') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e4') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e5') } 2015-04-01T16:21:55.554+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e6') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e7') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e8') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4e9') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ea') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4eb') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ec') } 2015-04-01T16:21:55.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ed') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ee') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ef') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f0') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f1') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f2') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f3') } 2015-04-01T16:21:55.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f4') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f5') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f6') } 
2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f7') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f8') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4f9') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4fa') } 2015-04-01T16:21:55.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4fb') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4fc') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4fd') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4fe') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c4ff') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c500') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c501') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c502') } 2015-04-01T16:21:55.558+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c503') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c504') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c505') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c506') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c507') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c508') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c509') } 2015-04-01T16:21:55.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50a') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50b') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50c') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50d') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50e') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c50f') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c510') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c511') } 2015-04-01T16:21:55.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c512') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c513') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c514') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c515') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c516') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c517') } 
2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c518') } 2015-04-01T16:21:55.561+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c519') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51a') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51b') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51c') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51d') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51e') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c51f') } 2015-04-01T16:21:55.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c520') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c521') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c522') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c523') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c524') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c525') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c526') } 2015-04-01T16:21:55.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c527') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c528') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c529') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52a') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52b') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52c') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52d') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52e') } 2015-04-01T16:21:55.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c52f') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c530') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c531') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c532') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c533') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c534') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c535') } 2015-04-01T16:21:55.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c536') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c537') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c538') } 
2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c539') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53a') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53b') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53c') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53d') } 2015-04-01T16:21:55.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53e') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c53f') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c540') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c541') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c542') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c543') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c544') } 2015-04-01T16:21:55.567+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c545') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c546') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c547') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c548') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c549') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54a') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54b') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54c') } 2015-04-01T16:21:55.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54d') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54e') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c54f') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c550') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c551') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c552') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c553') } 2015-04-01T16:21:55.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c554') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c555') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c556') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c557') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c558') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c559') } 
2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55a') } 2015-04-01T16:21:55.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55b') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55c') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55d') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55e') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c55f') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c560') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c561') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c562') } 2015-04-01T16:21:55.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c563') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c564') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c565') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c566') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c567') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c568') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c569') } 2015-04-01T16:21:55.572+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56a') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56b') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56c') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56d') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56e') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c56f') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c570') } 2015-04-01T16:21:55.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c571') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c572') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c573') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c574') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c575') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c576') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c577') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c578') } 2015-04-01T16:21:55.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c579') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57a') } 
2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57b') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57c') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57d') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57e') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c57f') } 2015-04-01T16:21:55.575+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c580') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c581') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c582') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c583') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c584') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c585') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c586') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c587') } 2015-04-01T16:21:55.576+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c588') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c589') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58a') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58b') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58c') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58d') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58e') } 2015-04-01T16:21:55.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c58f') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c590') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c591') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c592') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c593') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c594') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c595') } 2015-04-01T16:21:55.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c596') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c597') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c598') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c599') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59a') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59b') } 
2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59c') } 2015-04-01T16:21:55.579+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59d') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59e') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c59f') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a0') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a1') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a2') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a3') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a4') } 2015-04-01T16:21:55.580+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a5') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a6') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a7') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a8') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5a9') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5aa') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ab') } 2015-04-01T16:21:55.581+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ac') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ad') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ae') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5af') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b0') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b1') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b2') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b3') } 2015-04-01T16:21:55.582+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b4') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b5') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b6') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b7') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b8') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5b9') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ba') } 2015-04-01T16:21:55.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5bb') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5bc') } 
2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5bd') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5be') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5bf') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c0') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c1') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c2') } 2015-04-01T16:21:55.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c3') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c4') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c5') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c6') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c7') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c8') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5c9') } 2015-04-01T16:21:55.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ca') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5cb') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5cc') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5cd') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ce') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5cf') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d0') } 2015-04-01T16:21:55.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d1') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d2') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d3') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d4') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d5') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d6') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d7') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d8') } 2015-04-01T16:21:55.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5d9') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5da') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5db') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5dc') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5dd') } 
2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5de') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5df') } 2015-04-01T16:21:55.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e0') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e1') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e2') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e3') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e4') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e5') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e6') } 2015-04-01T16:21:55.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e7') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e8') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5e9') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ea') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5eb') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ec') } 2015-04-01T16:21:55.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ed') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ee') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ef') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f0') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f1') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f2') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f3') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f4') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f5') } 2015-04-01T16:21:55.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f6') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f7') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f8') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5f9') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5fa') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5fb') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5fc') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5fd') } 2015-04-01T16:21:55.594+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5fe') } 
2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c5ff') } 2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c600') } 2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c601') } 2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c602') } 2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c603') } 2015-04-01T16:21:55.595+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c604') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c605') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c606') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c607') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c608') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c609') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60a') } 2015-04-01T16:21:55.596+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60b') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60c') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60d') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60e') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c60f') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c610') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c611') } 2015-04-01T16:21:55.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c612') } 2015-04-01T16:21:55.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c613') } 2015-04-01T16:21:55.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c614') } 2015-04-01T16:21:55.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c615') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c616') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c617') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c618') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c619') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61a') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61b') } 2015-04-01T16:21:55.599+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61c') } 2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61d') } 2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61e') } 2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c61f') } 
2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c620') } 2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c621') } 2015-04-01T16:21:55.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c622') } 2015-04-01T16:21:55.601+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c623') } 2015-04-01T16:21:55.601+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c624') } 2015-04-01T16:21:55.601+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c625') } 2015-04-01T16:21:55.601+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c626') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c627') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c628') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c629') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62a') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62b') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62c') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62d') } 2015-04-01T16:21:55.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62e') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c62f') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c630') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c631') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c632') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c633') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c634') } 2015-04-01T16:21:55.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c635') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c636') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c637') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c638') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c639') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63a') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63b') } 2015-04-01T16:21:55.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63c') } 2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63d') } 2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63e') } 2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c63f') } 2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c640') } 
2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c641') }
2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c642') }
2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c643') }
2015-04-01T16:21:55.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c644') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c645') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c646') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c647') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c648') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c649') }
2015-04-01T16:21:55.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64a') }
2015-04-01T16:21:55.606+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.606+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64b') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64c') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64d') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64e') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c64f') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c650') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c651') }
2015-04-01T16:21:55.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c652') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c653') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c654') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c655') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c656') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c657') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c658') }
2015-04-01T16:21:55.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c659') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65a') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65b') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65c') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65d') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65e') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c65f') }
2015-04-01T16:21:55.609+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c660') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c661') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c662') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c663') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c664') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c665') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c666') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c667') }
2015-04-01T16:21:55.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c668') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c669') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66a') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66b') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66c') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66d') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66e') }
2015-04-01T16:21:55.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c66f') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c670') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c671') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c672') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c673') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c674') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c675') }
2015-04-01T16:21:55.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c676') }
2015-04-01T16:21:55.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c677') }
2015-04-01T16:21:55.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c678') }
2015-04-01T16:21:55.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c679') }
2015-04-01T16:21:55.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67a') }
2015-04-01T16:21:55.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67b') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67c') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67d') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67e') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c67f') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c680') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c681') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c682') }
2015-04-01T16:21:55.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c683') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c684') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c685') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c686') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c687') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c688') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c689') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68a') }
2015-04-01T16:21:55.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68b') }
2015-04-01T16:21:55.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68c') }
2015-04-01T16:21:55.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68d') }
2015-04-01T16:21:55.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68e') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c68f') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c690') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c691') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c692') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c693') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c694') }
2015-04-01T16:21:55.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c695') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c696') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c697') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c698') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c699') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69a') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69b') }
2015-04-01T16:21:55.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69c') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69d') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69e') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c69f') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a0') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a1') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a2') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a3') }
2015-04-01T16:21:55.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a4') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a5') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a6') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a7') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a8') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6a9') }
2015-04-01T16:21:55.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6aa') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ab') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ac') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ad') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ae') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6af') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b0') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b1') }
2015-04-01T16:21:55.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b2') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b3') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b4') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b5') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b6') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b7') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b8') }
2015-04-01T16:21:55.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6b9') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ba') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6bb') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6bc') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6bd') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6be') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6bf') }
2015-04-01T16:21:55.623+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c0') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c1') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c2') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c3') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c4') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c5') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c6') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c7') }
2015-04-01T16:21:55.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c8') }
2015-04-01T16:21:55.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6c9') }
2015-04-01T16:21:55.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ca') }
2015-04-01T16:21:55.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 27914 bytes
2015-04-01T16:21:55.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 29624 bytes
2015-04-01T16:21:55.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 31334 bytes
2015-04-01T16:21:55.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 33044 bytes
2015-04-01T16:21:55.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 34754 bytes
2015-04-01T16:21:55.628+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:55.631+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.632+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|1768, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.643+0000 D REPL [rsBackgroundSync] bgsync buffer has 9332 bytes
2015-04-01T16:21:55.643+0000 D REPL [rsBackgroundSync] bgsync buffer has 11042 bytes
2015-04-01T16:21:55.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 12752 bytes
2015-04-01T16:21:55.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 14462 bytes
2015-04-01T16:21:55.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 16172 bytes
2015-04-01T16:21:55.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 17882 bytes
2015-04-01T16:21:55.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 19592 bytes
2015-04-01T16:21:55.654+0000 D REPL [rsSync] replication batch size is 238
2015-04-01T16:21:55.656+0000 D REPL [rsBackgroundSync] bgsync buffer has 21302 bytes
2015-04-01T16:21:55.656+0000 D REPL [rsBackgroundSync] bgsync buffer has 23012 bytes
2015-04-01T16:21:55.657+0000 D REPL [rsBackgroundSync] bgsync buffer has 24722 bytes
2015-04-01T16:21:55.657+0000 D REPL [rsBackgroundSync] bgsync buffer has 26432 bytes
2015-04-01T16:21:55.657+0000 D REPL [rsBackgroundSync] bgsync buffer has 28142 bytes
2015-04-01T16:21:55.657+0000 D REPL [rsBackgroundSync] bgsync buffer has 29852 bytes
2015-04-01T16:21:55.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6cb') }
2015-04-01T16:21:55.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6cc') }
2015-04-01T16:21:55.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6cd') }
2015-04-01T16:21:55.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ce') }
2015-04-01T16:21:55.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6cf') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d0') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d1') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d2') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d3') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d4') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d5') }
2015-04-01T16:21:55.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d6') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d7') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d8') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6d9') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6da') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6db') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6dc') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6dd') }
2015-04-01T16:21:55.659+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6de') }
2015-04-01T16:21:55.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6df') }
2015-04-01T16:21:55.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e0') }
2015-04-01T16:21:55.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e1') }
2015-04-01T16:21:55.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e2') }
2015-04-01T16:21:55.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e3') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e4') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e5') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e6') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e7') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e8') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6e9') }
2015-04-01T16:21:55.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ea') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6eb') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ec') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ed') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ee') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ef') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f0') }
2015-04-01T16:21:55.662+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f1') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f2') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f3') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f4') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f5') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f6') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f7') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f8') }
2015-04-01T16:21:55.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6f9') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6fa') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6fb') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6fc') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6fd') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6fe') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c6ff') }
2015-04-01T16:21:55.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c700') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c701') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c702') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c703') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c704') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c705') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c706') }
2015-04-01T16:21:55.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c707') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c708') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c709') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70a') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70b') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70c') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70d') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70e') }
2015-04-01T16:21:55.666+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c70f') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c710') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c711') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c712') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c713') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c714') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c715') }
2015-04-01T16:21:55.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c716') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c717') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c718') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c719') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71a') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71b') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71c') }
2015-04-01T16:21:55.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71d') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71e') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c71f') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c720') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c721') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c722') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c723') }
2015-04-01T16:21:55.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c724') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c725') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c726') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c727') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c728') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c729') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72a') }
2015-04-01T16:21:55.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72b') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72c') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72d') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72e') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c72f') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c730') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c731') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c732') }
2015-04-01T16:21:55.671+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c733') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c734') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c735') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c736') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c737') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c738') }
2015-04-01T16:21:55.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c739') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73a') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73b') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73c') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73d') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73e') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c73f') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c740') }
2015-04-01T16:21:55.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c741') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c742') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c743') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c744') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c745') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c746') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c747') }
2015-04-01T16:21:55.674+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c748') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c749') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74a') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74b') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74c') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74d') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74e') }
2015-04-01T16:21:55.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c74f') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c750') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c751') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c752') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c753') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c754') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c755') }
2015-04-01T16:21:55.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c756') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c757') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c758') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c759') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75a') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75b') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75c') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75d') }
2015-04-01T16:21:55.677+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75e') }
2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c75f') }
2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c760') }
2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c761') }
2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c762') } 2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c763') } 2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c764') } 2015-04-01T16:21:55.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c765') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c766') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c767') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c768') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c769') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76a') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76b') } 2015-04-01T16:21:55.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76c') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76d') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76e') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c76f') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c770') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c771') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c772') } 2015-04-01T16:21:55.680+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c773') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c774') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c775') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c776') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c777') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c778') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c779') } 2015-04-01T16:21:55.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77a') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77b') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77c') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77d') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77e') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c77f') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c780') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c781') } 2015-04-01T16:21:55.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c782') } 
2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c783') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c784') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c785') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c786') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c787') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c788') } 2015-04-01T16:21:55.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c789') } 2015-04-01T16:21:55.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78a') } 2015-04-01T16:21:55.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78b') } 2015-04-01T16:21:55.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78c') } 2015-04-01T16:21:55.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78d') } 2015-04-01T16:21:55.684+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78e') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c78f') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c790') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c791') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c792') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c793') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c794') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c795') } 2015-04-01T16:21:55.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c796') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c797') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c798') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c799') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79a') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79b') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79c') } 2015-04-01T16:21:55.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79d') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79e') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c79f') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a0') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a1') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a2') } 2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a3') } 
2015-04-01T16:21:55.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a4') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a5') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a6') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a7') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a8') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7a9') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7aa') } 2015-04-01T16:21:55.688+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ab') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ac') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ad') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ae') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7af') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b0') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b1') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b2') } 2015-04-01T16:21:55.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b3') } 2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b4') }
2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b5') }
2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b6') }
2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b7') }
2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:55.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b8') }
2015-04-01T16:21:55.712+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.713+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.713+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:55.713+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:55.713+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:55.713+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:55.713+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.713+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:55.714+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.714+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.714+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:55.714+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:55.714+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:55.715+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.715+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.715+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:55.715+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.717+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.729+0000 D REPL [rsBackgroundSync] bgsync buffer has 1254 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 2964 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 4674 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 6384 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 8094 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 9804 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 11514 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 13224 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 14934 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 16644 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 18354 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 20064 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 21774 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 23484 bytes
2015-04-01T16:21:55.730+0000 D REPL [rsBackgroundSync] bgsync buffer has 25194 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 26904 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 28614 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 30324 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 32034 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 33744 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 35454 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 37164 bytes
2015-04-01T16:21:55.731+0000 D REPL [rsBackgroundSync] bgsync buffer has 38874 bytes
2015-04-01T16:21:55.731+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|2008, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.732+0000 D REPL [rsSync] replication batch size is 264
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7b9') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ba') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7bb') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7bc') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7bd') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7be') }
2015-04-01T16:21:55.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7bf') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c0') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c1') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c2') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c3') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c4') }
2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c5') } 2015-04-01T16:21:55.765+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c6') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c7') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c8') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7c9') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ca') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7cb') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7cc') } 2015-04-01T16:21:55.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7cd') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ce') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7cf') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d0') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d1') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d2') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d3') } 2015-04-01T16:21:55.767+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d4') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d5') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d6') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d7') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d8') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7d9') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7da') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7db') } 2015-04-01T16:21:55.768+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7dc') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7dd') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7de') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7df') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e0') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e1') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e2') } 2015-04-01T16:21:55.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e3') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e4') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e5') } 
2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e6') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e7') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e8') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7e9') } 2015-04-01T16:21:55.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ea') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7eb') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ec') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ed') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ee') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ef') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f0') } 2015-04-01T16:21:55.771+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f1') } 2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f2') } 2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f3') } 2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f4') } 2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f5') } 2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f6') }
2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f7') }
2015-04-01T16:21:55.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f8') }
2015-04-01T16:21:55.773+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7f9') }
2015-04-01T16:21:55.773+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7fa') }
2015-04-01T16:21:55.773+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7fb') }
2015-04-01T16:21:55.773+0000 D REPL [rsBackgroundSync] bgsync buffer has 40584 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 42294 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 44004 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 45714 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 47424 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 49134 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 50844 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 52554 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 54264 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 55974 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 57684 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 59394 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 61104 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 62814 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 64524 bytes
2015-04-01T16:21:55.774+0000 D REPL [rsBackgroundSync] bgsync buffer has 66234 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 67944 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 69654 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 71364 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 73074 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 74784 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 76494 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 78204 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 79914 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 81624 bytes
2015-04-01T16:21:55.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 83334 bytes
2015-04-01T16:21:55.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7fc') }
2015-04-01T16:21:55.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7fd') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7fe') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c7ff') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c800') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c801') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c802') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c803') }
2015-04-01T16:21:55.776+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c804') }
2015-04-01T16:21:55.776+0000 D
QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c805') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c806') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c807') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c808') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c809') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80a') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80b') } 2015-04-01T16:21:55.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80c') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80d') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80e') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c80f') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c810') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c811') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c812') } 2015-04-01T16:21:55.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c813') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c814') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c815') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c816') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c817') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c818') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c819') } 2015-04-01T16:21:55.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81a') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81b') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81c') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81d') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81e') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c81f') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c820') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c821') } 2015-04-01T16:21:55.780+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c822') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c823') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c824') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c825') } 
2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c826') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c827') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c828') } 2015-04-01T16:21:55.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c829') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82a') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82b') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82c') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82d') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82e') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c82f') } 2015-04-01T16:21:55.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c830') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c831') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c832') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c833') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c834') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c835') } 2015-04-01T16:21:55.783+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c836') } 2015-04-01T16:21:55.783+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:55.783+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:32768 fromFreeList: 1 eloc: 3:1648a000 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c837') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c838') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c839') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83a') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83b') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83c') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83d') } 2015-04-01T16:21:55.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83e') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c83f') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c840') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c841') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c842') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c843') } 2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c844') } 
2015-04-01T16:21:55.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c845') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c846') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c847') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c848') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c849') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84a') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84b') } 2015-04-01T16:21:55.786+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84c') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84d') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84e') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c84f') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c850') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c851') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c852') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c853') } 2015-04-01T16:21:55.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c854') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c855') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c856') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c857') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c858') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c859') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85a') } 2015-04-01T16:21:55.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85b') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85c') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85d') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85e') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c85f') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c860') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c861') } 2015-04-01T16:21:55.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c862') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c863') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c864') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c865') } 
2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c866') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c867') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c868') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c869') } 2015-04-01T16:21:55.790+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86a') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86b') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86c') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86d') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86e') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c86f') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c870') } 2015-04-01T16:21:55.791+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c871') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c872') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c873') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c874') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c875') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c876') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c877') } 2015-04-01T16:21:55.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c878') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c879') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87a') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87b') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87c') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87d') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87e') } 2015-04-01T16:21:55.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c87f') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c880') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c881') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c882') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c883') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c884') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c885') } 2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c886') } 
2015-04-01T16:21:55.794+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c887') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c888') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c889') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88a') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88b') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88c') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88d') } 2015-04-01T16:21:55.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88e') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c88f') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c890') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c891') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c892') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c893') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c894') } 2015-04-01T16:21:55.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c895') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c896') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c897') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c898') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c899') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89a') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89b') } 2015-04-01T16:21:55.797+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89c') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89d') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89e') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c89f') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a0') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a1') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a2') } 2015-04-01T16:21:55.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a3') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a4') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a5') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a6') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a7') } 
2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a8') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8a9') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8aa') } 2015-04-01T16:21:55.799+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ab') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ac') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ad') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ae') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8af') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b0') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b1') } 2015-04-01T16:21:55.800+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b2') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b3') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b4') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b5') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b6') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b7') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b8') } 2015-04-01T16:21:55.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8b9') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ba') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8bb') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8bc') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8bd') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8be') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8bf') } 2015-04-01T16:21:55.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c0') } 2015-04-01T16:21:55.807+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|2272, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:55.810+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:55.833+0000 D REPL [rsSync] replication batch size is 735 2015-04-01T16:21:55.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c1') } 2015-04-01T16:21:55.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c2') } 2015-04-01T16:21:55.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c3') } 2015-04-01T16:21:55.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c4') } 2015-04-01T16:21:55.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c5') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c6') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c7') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c8') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8c9') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ca') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8cb') } 2015-04-01T16:21:55.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8cc') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8cd') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ce') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8cf') } 2015-04-01T16:21:55.835+0000 D 
QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d0') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d1') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d2') } 2015-04-01T16:21:55.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d3') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d4') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d5') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d6') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d7') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d8') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8d9') } 2015-04-01T16:21:55.836+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8da') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8db') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8dc') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8dd') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8de') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8df') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
ObjectId('551c1b23e15b5605d452c8e0') } 2015-04-01T16:21:55.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e1') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e2') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e3') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e4') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e5') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e6') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e7') } 2015-04-01T16:21:55.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e8') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8e9') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ea') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8eb') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ec') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ed') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ee') } 2015-04-01T16:21:55.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ef') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f0') } 
2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f1') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f2') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f3') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f4') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f5') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f6') } 2015-04-01T16:21:55.840+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f7') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f8') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8f9') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8fa') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8fb') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8fc') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8fd') } 2015-04-01T16:21:55.841+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8fe') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c8ff') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c900') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c901') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c902') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c903') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c904') } 2015-04-01T16:21:55.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c905') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c906') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c907') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c908') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c909') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90a') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90b') } 2015-04-01T16:21:55.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90c') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90d') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90e') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c90f') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c910') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c911') } 
2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c912') } 2015-04-01T16:21:55.844+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c913') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c914') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c915') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c916') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c917') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c918') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c919') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91a') } 2015-04-01T16:21:55.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91b') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91c') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91d') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91e') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c91f') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c920') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c921') } 2015-04-01T16:21:55.846+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c922') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c923') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c924') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c925') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c926') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c927') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c928') } 2015-04-01T16:21:55.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c929') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92a') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92b') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92c') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92d') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92e') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c92f') } 2015-04-01T16:21:55.848+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c930') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c931') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c932') } 
2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c933') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c934') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c935') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c936') } 2015-04-01T16:21:55.849+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c937') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c938') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c939') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93a') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93b') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93c') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93d') } 2015-04-01T16:21:55.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93e') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c93f') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c940') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c941') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c942') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c943') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c944') } 2015-04-01T16:21:55.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c945') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c946') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c947') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c948') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c949') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94a') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94b') } 2015-04-01T16:21:55.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94c') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94d') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94e') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c94f') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c950') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c951') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c952') } 2015-04-01T16:21:55.853+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c953') } 
2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c954') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c955') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c956') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c957') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c958') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c959') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95a') } 2015-04-01T16:21:55.854+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95b') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95c') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95d') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95e') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c95f') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c960') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c961') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c962') } 2015-04-01T16:21:55.855+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c963') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c964') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c965') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c966') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c967') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c968') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c969') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96a') } 2015-04-01T16:21:55.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96b') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96c') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96d') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96e') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c96f') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c970') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c971') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c972') } 2015-04-01T16:21:55.857+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c973') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c974') } 
2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c975') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c976') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c977') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c978') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c979') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97a') } 2015-04-01T16:21:55.858+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97b') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97c') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97d') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97e') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c97f') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c980') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c981') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c982') } 2015-04-01T16:21:55.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c983') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c984') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c985') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c986') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c987') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c988') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c989') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98a') } 2015-04-01T16:21:55.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98b') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98c') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98d') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98e') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c98f') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c990') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c991') } 2015-04-01T16:21:55.861+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c992') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c993') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c994') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c995') } 
2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c996') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c997') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c998') } 2015-04-01T16:21:55.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c999') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99a') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99b') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99c') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99d') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99e') } 2015-04-01T16:21:55.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c99f') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a0') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a1') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a2') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a3') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a4') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a5') } 2015-04-01T16:21:55.864+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a6') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a7') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a8') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9a9') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9aa') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ab') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ac') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ad') } 2015-04-01T16:21:55.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ae') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9af') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b0') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b1') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b2') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b3') } 2015-04-01T16:21:55.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b4') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b5') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b6') } 
2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b7') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b8') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9b9') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ba') } 2015-04-01T16:21:55.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9bb') } 2015-04-01T16:21:55.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9bc') } 2015-04-01T16:21:55.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9bd') } 2015-04-01T16:21:55.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9be') } 2015-04-01T16:21:55.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9bf') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c0') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c1') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c2') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c3') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c4') } 2015-04-01T16:21:55.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c5') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c6') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c7') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c8') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9c9') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ca') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9cb') } 2015-04-01T16:21:55.870+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9cc') } 2015-04-01T16:21:55.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9cd') } 2015-04-01T16:21:55.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ce') } 2015-04-01T16:21:55.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9cf') } 2015-04-01T16:21:55.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d0') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d1') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d2') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d3') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d4') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d5') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d6') } 2015-04-01T16:21:55.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d7') } 
2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d8') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9d9') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9da') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9db') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9dc') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9dd') } 2015-04-01T16:21:55.873+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9de') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9df') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e0') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e1') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e2') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e3') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e4') } 2015-04-01T16:21:55.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e5') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e6') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e7') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e8') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9e9') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ea') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9eb') } 2015-04-01T16:21:55.875+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ec') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ed') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ee') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ef') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f0') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f1') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f2') } 2015-04-01T16:21:55.876+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f3') } 2015-04-01T16:21:55.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f4') } 2015-04-01T16:21:55.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f5') } 2015-04-01T16:21:55.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f6') } 2015-04-01T16:21:55.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f7') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f8') } 
2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9f9') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9fa') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9fb') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9fc') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9fd') } 2015-04-01T16:21:55.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9fe') } 2015-04-01T16:21:55.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452c9ff') } 2015-04-01T16:21:55.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca00') } 2015-04-01T16:21:55.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca01') } 2015-04-01T16:21:55.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca02') } 2015-04-01T16:21:55.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca03') } 2015-04-01T16:21:55.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca04') } 2015-04-01T16:21:55.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca05') } 2015-04-01T16:21:55.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca06') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca07') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca08') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca09') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0a') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0b') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0c') } 2015-04-01T16:21:55.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0d') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0e') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca0f') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca10') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca11') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca12') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca13') } 2015-04-01T16:21:55.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca14') } 2015-04-01T16:21:55.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca15') } 2015-04-01T16:21:55.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca16') } 2015-04-01T16:21:55.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca17') } 2015-04-01T16:21:55.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca18') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca19') } 
2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1a') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1b') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1c') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1d') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1e') } 2015-04-01T16:21:55.884+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca1f') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca20') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca21') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca22') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca23') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca24') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca25') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca26') } 2015-04-01T16:21:55.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca27') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca28') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca29') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2a') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2b') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2c') } 2015-04-01T16:21:55.886+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2d') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2e') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca2f') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca30') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca31') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca32') } 2015-04-01T16:21:55.887+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca33') } 2015-04-01T16:21:55.888+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:55.888+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000 2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca34') } 2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca35') } 2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca36') } 2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca37') } 2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca38') } 
2015-04-01T16:21:55.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca39') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3a') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3b') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3c') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3d') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3e') } 2015-04-01T16:21:55.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca3f') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca40') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca41') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca42') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca43') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca44') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca45') } 2015-04-01T16:21:55.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca46') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca47') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca48') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca49') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4a') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4b') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4c') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4d') } 2015-04-01T16:21:55.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4e') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca4f') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca50') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca51') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca52') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca53') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca54') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca55') } 2015-04-01T16:21:55.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca56') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca57') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca58') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca59') } 
2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5a') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5b') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5c') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5d') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5e') } 2015-04-01T16:21:55.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca5f') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca60') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca61') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca62') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca63') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca64') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca65') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca66') } 2015-04-01T16:21:55.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca67') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca68') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca69') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6a') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6b') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6c') } 2015-04-01T16:21:55.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6d') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6e') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca6f') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca70') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca71') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca72') } 2015-04-01T16:21:55.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca73') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca74') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca75') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca76') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca77') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca78') } 2015-04-01T16:21:55.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca79') } 2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7a') } 
2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7b') } 2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7c') } 2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7d') } 2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7e') } 2015-04-01T16:21:55.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca7f') } 2015-04-01T16:21:55.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca80') } 2015-04-01T16:21:55.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca81') } 2015-04-01T16:21:55.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca82') } 2015-04-01T16:21:55.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca83') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca84') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca85') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca86') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca87') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca88') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca89') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8a') } 2015-04-01T16:21:55.900+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8b') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8c') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8d') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8e') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca8f') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca90') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca91') } 2015-04-01T16:21:55.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca92') } 2015-04-01T16:21:55.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca93') } 2015-04-01T16:21:55.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca94') } 2015-04-01T16:21:55.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca95') } 2015-04-01T16:21:55.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca96') } 2015-04-01T16:21:55.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca97') } 2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca98') } 2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca99') } 2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9a') } 2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9b') } 
2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9c') } 2015-04-01T16:21:55.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9d') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9e') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452ca9f') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa0') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa1') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa2') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa3') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa4') } 2015-04-01T16:21:55.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa5') } 2015-04-01T16:21:55.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa6') } 2015-04-01T16:21:55.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa7') } 2015-04-01T16:21:55.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa8') } 2015-04-01T16:21:55.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caa9') } 2015-04-01T16:21:55.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caaa') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caab') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452caac') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caad') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caae') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caaf') } 2015-04-01T16:21:55.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab0') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab1') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab2') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab3') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab4') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab5') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab6') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab7') } 2015-04-01T16:21:55.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab8') } 2015-04-01T16:21:55.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cab9') } 2015-04-01T16:21:55.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caba') } 2015-04-01T16:21:55.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cabb') } 2015-04-01T16:21:55.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cabc') } 
2015-04-01T16:21:55.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cabd') } 2015-04-01T16:21:55.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cabe') } 2015-04-01T16:21:55.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cabf') } 2015-04-01T16:21:55.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac0') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac1') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac2') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac3') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac4') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac5') } 2015-04-01T16:21:55.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac6') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac7') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac8') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cac9') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caca') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cacb') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cacc') } 2015-04-01T16:21:55.911+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452cacd') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cace') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cacf') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad0') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad1') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad2') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad3') } 2015-04-01T16:21:55.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad4') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad5') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad6') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad7') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad8') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cad9') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cada') } 2015-04-01T16:21:55.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cadb') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cadc') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cadd') } 
2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cade') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cadf') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae0') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae1') } 2015-04-01T16:21:55.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae2') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae3') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae4') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae5') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae6') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae7') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae8') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cae9') } 2015-04-01T16:21:55.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caea') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caeb') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caec') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caed') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452caee') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caef') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf0') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf1') } 2015-04-01T16:21:55.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf2') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf3') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf4') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf5') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf6') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf7') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf8') } 2015-04-01T16:21:55.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caf9') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cafa') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cafb') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cafc') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cafd') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cafe') } 
2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452caff') } 2015-04-01T16:21:55.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb00') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb01') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb02') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb03') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb04') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb05') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb06') } 2015-04-01T16:21:55.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb07') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb08') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb09') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0a') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0b') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0c') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0d') } 2015-04-01T16:21:55.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0e') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb0f') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb10') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb11') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb12') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb13') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb14') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb15') } 2015-04-01T16:21:55.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb16') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb17') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb18') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb19') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1a') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1b') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1c') } 2015-04-01T16:21:55.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1d') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1e') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb1f') } 
2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb20') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb21') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb22') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb23') } 2015-04-01T16:21:55.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb24') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb25') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb26') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb27') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb28') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb29') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2a') } 2015-04-01T16:21:55.924+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2b') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2c') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2d') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2e') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb2f') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb30') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb31') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb32') } 2015-04-01T16:21:55.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb33') } 2015-04-01T16:21:55.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb34') } 2015-04-01T16:21:55.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb35') } 2015-04-01T16:21:55.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb36') } 2015-04-01T16:21:55.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb37') } 2015-04-01T16:21:55.927+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb38') } 2015-04-01T16:21:55.927+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb39') } 2015-04-01T16:21:55.927+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3a') } 2015-04-01T16:21:55.927+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3b') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3c') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3d') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3e') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb3f') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb40') } 
2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb41') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb42') } 2015-04-01T16:21:55.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb43') } 2015-04-01T16:21:55.929+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb44') } 2015-04-01T16:21:55.930+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb45') } 2015-04-01T16:21:55.930+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb46') } 2015-04-01T16:21:55.930+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb47') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb48') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb49') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4a') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4b') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4c') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4d') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4e') } 2015-04-01T16:21:55.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb4f') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb50') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb51') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb52') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb53') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb54') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb55') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb56') } 2015-04-01T16:21:55.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb57') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb58') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb59') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5a') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5b') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5c') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5d') } 2015-04-01T16:21:55.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5e') } 2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb5f') } 2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb60') } 2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb61') } 
2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb62') }
2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb63') }
2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb64') }
2015-04-01T16:21:55.934+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb65') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb66') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb67') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb68') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb69') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6a') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6b') }
2015-04-01T16:21:55.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6c') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6d') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6e') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb6f') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb70') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb71') }
2015-04-01T16:21:55.936+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb72') }
2015-04-01T16:21:55.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb73') }
2015-04-01T16:21:55.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb74') }
2015-04-01T16:21:55.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb75') }
2015-04-01T16:21:55.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb76') }
2015-04-01T16:21:55.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb77') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb78') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb79') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7a') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7b') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7c') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7d') }
2015-04-01T16:21:55.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7e') }
2015-04-01T16:21:55.939+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb7f') }
2015-04-01T16:21:55.939+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb80') }
2015-04-01T16:21:55.939+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb81') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb82') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb83') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb84') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb85') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb86') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb87') }
2015-04-01T16:21:55.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb88') }
2015-04-01T16:21:55.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb89') }
2015-04-01T16:21:55.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8a') }
2015-04-01T16:21:55.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8b') }
2015-04-01T16:21:55.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8c') }
2015-04-01T16:21:55.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8d') }
2015-04-01T16:21:55.942+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8e') }
2015-04-01T16:21:55.942+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb8f') }
2015-04-01T16:21:55.942+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb90') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb91') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb92') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb93') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb94') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb95') }
2015-04-01T16:21:55.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb96') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb97') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb98') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb99') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9a') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9b') }
2015-04-01T16:21:55.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9c') }
2015-04-01T16:21:55.945+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9d') }
2015-04-01T16:21:55.945+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9e') }
2015-04-01T16:21:55.945+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cb9f') }
2015-04-01T16:21:55.946+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:55.948+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.948+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3007, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.949+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.949+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "duplicatekeys" }
2015-04-01T16:21:55.949+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.duplicatekeys {}
2015-04-01T16:21:55.949+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 3:16492000
2015-04-01T16:21:55.949+0000 D STORAGE [repl writer worker 15] Tests04011621.duplicatekeys: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.949+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.949+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000
2015-04-01T16:21:55.950+0000 D STORAGE [repl writer worker 15] Tests04011621.duplicatekeys: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.950+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.950+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3008, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.951+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.951+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:55.951+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.951+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3009, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.952+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.952+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "notcappedcollection" }
2015-04-01T16:21:55.952+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.notcappedcollection {}
2015-04-01T16:21:55.953+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 0 eloc: 3:16494000
2015-04-01T16:21:55.953+0000 D STORAGE [repl writer worker 15] Tests04011621.notcappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.953+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.953+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 3:16496000
2015-04-01T16:21:55.953+0000 D STORAGE [repl writer worker 15] Tests04011621.notcappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.954+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.954+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3010, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.954+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.954+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "cappedcollection" }
2015-04-01T16:21:55.954+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.cappedcollection
2015-04-01T16:21:55.954+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.cappedcollection
2015-04-01T16:21:55.954+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.cappedcollection" }
2015-04-01T16:21:55.954+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.954+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:55.956+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3011, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.956+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.957+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.957+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "cappedcollection", capped: true, size: 10000 }
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.cappedcollection { capped: true, size: 10000 }
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:10240 fromFreeList: 0 eloc: 3:164b6000
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6c5000
2015-04-01T16:21:55.958+0000 D STORAGE [repl writer worker 15] Tests04011621.cappedcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.958+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3012, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.958+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.959+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:55.959+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba0') }
2015-04-01T16:21:55.959+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:55.959+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba1') }
2015-04-01T16:21:55.960+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3014, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.960+0000 D REPL [rsBackgroundSync] bgsync buffer has 349 bytes
2015-04-01T16:21:55.960+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.961+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.961+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:55.961+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:55.962+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:55.962+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:55.962+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.962+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:55.962+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3015, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.962+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.964+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:55.965+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000
2015-04-01T16:21:55.965+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:55.966+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3016, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:55.966+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:55.966+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:55.966+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba2') }
2015-04-01T16:21:55.967+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba3') }
2015-04-01T16:21:55.967+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba4') }
2015-04-01T16:21:55.967+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905315000|3019, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.010+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.011+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.011+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "tmp.mr.testcollection_1", temp: true }
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.tmp.mr.testcollection_1 { temp: true }
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:ed000
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] Tests04011621.tmp.mr.testcollection_1: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:6ef000
2015-04-01T16:21:56.011+0000 D STORAGE [repl writer worker 15] Tests04011621.tmp.mr.testcollection_1: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.013+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.014+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.017+0000 D REPL [rsSync] replication batch size is 5
2015-04-01T16:21:56.017+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "A" }
2015-04-01T16:21:56.017+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "B" }
2015-04-01T16:21:56.017+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "C" }
2015-04-01T16:21:56.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "X" }
2015-04-01T16:21:56.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: "_id" }
2015-04-01T16:21:56.018+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.018+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.019+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.019+0000 D COMMAND [repl writer worker 15] run command admin.$cmd { renameCollection: "Tests04011621.tmp.mr.testcollection_1", to: "Tests04011621.mrout", stayTemp: false }
2015-04-01T16:21:56.019+0000 D COMMAND [repl writer worker 15] command: { renameCollection: "Tests04011621.tmp.mr.testcollection_1", to: "Tests04011621.mrout", stayTemp: false }
2015-04-01T16:21:56.020+0000 D STORAGE [repl writer worker 15] Tests04011621.mrout: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.020+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.052+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.052+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba2') }
2015-04-01T16:21:56.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba3') }
2015-04-01T16:21:56.053+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b23e15b5605d452cba4') }
2015-04-01T16:21:56.053+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.055+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba5') }
2015-04-01T16:21:56.056+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|11, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.057+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.058+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba6') }
2015-04-01T16:21:56.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba7') }
2015-04-01T16:21:56.058+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.075+0000 D REPL [rsBackgroundSync] bgsync buffer has 111 bytes
2015-04-01T16:21:56.075+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.077+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba5') }
2015-04-01T16:21:56.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba6') }
2015-04-01T16:21:56.078+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.078+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.079+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba7') }
2015-04-01T16:21:56.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba8') }
2015-04-01T16:21:56.081+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.097+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.098+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba8') }
2015-04-01T16:21:56.099+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.101+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.101+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cba9') }
2015-04-01T16:21:56.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cbaa') }
2015-04-01T16:21:56.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.104+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.104+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b24e15b5605d452cbab') }
2015-04-01T16:21:56.104+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.115+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.116+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.116+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:56.116+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:56.116+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:56.117+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:56.117+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.117+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:56.117+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.118+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.118+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.118+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:56.118+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:56.119+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:56.119+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.119+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:56.119+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 2:32c6000
2015-04-01T16:21:56.119+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:56.119+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.119+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.120+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 0 }
2015-04-01T16:21:56.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 }
2015-04-01T16:21:56.121+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|25, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.123+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 }
2015-04-01T16:21:56.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 }
2015-04-01T16:21:56.124+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.125+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.126+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:56.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:56.127+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.128+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.129+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.129+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.129+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:56.129+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:56.130+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.132+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 8 } 2015-04-01T16:21:56.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 9 } 2015-04-01T16:21:56.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:56.133+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.135+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 11 } 2015-04-01T16:21:56.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 12 } 2015-04-01T16:21:56.135+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.138+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.138+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 13 } 2015-04-01T16:21:56.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 14 } 2015-04-01T16:21:56.139+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|38, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.140+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.141+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 15 } 2015-04-01T16:21:56.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 16 } 2015-04-01T16:21:56.141+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.144+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 17 } 2015-04-01T16:21:56.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 18 } 2015-04-01T16:21:56.145+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.146+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.146+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.147+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 19 } 2015-04-01T16:21:56.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 20 } 2015-04-01T16:21:56.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 21 } 2015-04-01T16:21:56.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.149+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.150+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 22 } 2015-04-01T16:21:56.150+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.153+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.153+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 23 } 2015-04-01T16:21:56.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 24 } 2015-04-01T16:21:56.154+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.155+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.156+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 25 } 2015-04-01T16:21:56.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.158+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.159+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 26 } 2015-04-01T16:21:56.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 27 } 2015-04-01T16:21:56.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 28 } 2015-04-01T16:21:56.160+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.162+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.162+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 29 } 2015-04-01T16:21:56.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 30 } 2015-04-01T16:21:56.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 31 } 2015-04-01T16:21:56.164+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.165+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.166+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 32 } 2015-04-01T16:21:56.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 33 } 2015-04-01T16:21:56.167+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.168+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.168+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 34 } 2015-04-01T16:21:56.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 35 } 2015-04-01T16:21:56.169+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|59, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.171+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.171+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.172+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 36 } 2015-04-01T16:21:56.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 37 } 2015-04-01T16:21:56.172+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.174+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.174+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 38 } 2015-04-01T16:21:56.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 39 } 2015-04-01T16:21:56.175+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.177+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.177+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 40 } 2015-04-01T16:21:56.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 41 } 2015-04-01T16:21:56.178+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.180+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.180+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 42 } 2015-04-01T16:21:56.181+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.183+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.184+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 43 } 2015-04-01T16:21:56.184+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.186+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.188+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 44 } 2015-04-01T16:21:56.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 45 } 2015-04-01T16:21:56.189+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|69, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.189+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.190+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.190+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 46 } 2015-04-01T16:21:56.190+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 47 } 2015-04-01T16:21:56.190+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|71, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.194+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 48 } 2015-04-01T16:21:56.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 49 } 2015-04-01T16:21:56.195+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|73, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.196+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.197+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.197+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 50 } 2015-04-01T16:21:56.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 51 } 2015-04-01T16:21:56.198+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.199+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.200+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 52 } 2015-04-01T16:21:56.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 53 } 2015-04-01T16:21:56.200+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.204+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.204+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 54 } 2015-04-01T16:21:56.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 55 } 2015-04-01T16:21:56.205+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|79, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.206+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.206+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 56 } 2015-04-01T16:21:56.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 57 } 2015-04-01T16:21:56.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 58 } 2015-04-01T16:21:56.208+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|82, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.209+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.210+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 59 } 2015-04-01T16:21:56.210+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|83, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.212+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.213+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 60 } 2015-04-01T16:21:56.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 61 } 2015-04-01T16:21:56.214+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.216+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.216+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.217+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 62 } 2015-04-01T16:21:56.217+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|86, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.218+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 63 } 2015-04-01T16:21:56.219+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.221+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.222+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.222+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 64 } 2015-04-01T16:21:56.223+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 65 } 2015-04-01T16:21:56.223+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|89, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.224+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.225+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.226+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 66 } 2015-04-01T16:21:56.226+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|90, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.228+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 67 } 2015-04-01T16:21:56.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 68 } 2015-04-01T16:21:56.229+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|92, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.230+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.231+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.231+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 69 } 2015-04-01T16:21:56.232+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.233+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.234+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 70 } 2015-04-01T16:21:56.234+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 71 } 2015-04-01T16:21:56.234+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|95, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.236+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.237+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.237+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 72 } 2015-04-01T16:21:56.237+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 73 } 2015-04-01T16:21:56.238+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.239+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.240+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 74 } 2015-04-01T16:21:56.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 75 } 2015-04-01T16:21:56.241+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|99, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.242+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.242+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.242+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 76 } 2015-04-01T16:21:56.243+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.246+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.247+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 77 } 2015-04-01T16:21:56.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 78 } 2015-04-01T16:21:56.248+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 79 } 2015-04-01T16:21:56.249+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.249+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|103, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.250+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.251+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 80 } 2015-04-01T16:21:56.251+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 81 } 2015-04-01T16:21:56.252+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 82 } 2015-04-01T16:21:56.252+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.253+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.253+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.253+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 83 } 2015-04-01T16:21:56.254+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 84 } 2015-04-01T16:21:56.254+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|108, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.255+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.256+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 85 } 2015-04-01T16:21:56.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.258+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.259+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 86 } 2015-04-01T16:21:56.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 87 } 2015-04-01T16:21:56.259+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|111, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.261+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.262+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.262+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 88 } 2015-04-01T16:21:56.263+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.264+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.265+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 89 } 2015-04-01T16:21:56.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 90 } 2015-04-01T16:21:56.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 91 } 2015-04-01T16:21:56.267+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|115, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.268+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.268+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 92 } 2015-04-01T16:21:56.269+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 93 } 2015-04-01T16:21:56.269+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.270+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.270+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 94 } 2015-04-01T16:21:56.271+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|118, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.274+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.274+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.275+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 95 } 2015-04-01T16:21:56.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 96 } 2015-04-01T16:21:56.275+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.279+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.280+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 97 } 2015-04-01T16:21:56.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 98 } 2015-04-01T16:21:56.280+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 99 } 2015-04-01T16:21:56.280+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.281+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.281+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 100 } 2015-04-01T16:21:56.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 101 } 2015-04-01T16:21:56.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 102 } 2015-04-01T16:21:56.282+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.285+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.285+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.286+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 103 } 2015-04-01T16:21:56.286+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|127, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.289+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.289+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 104 } 2015-04-01T16:21:56.290+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 105 } 2015-04-01T16:21:56.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 106 } 2015-04-01T16:21:56.291+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.291+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.292+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.293+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 107 } 2015-04-01T16:21:56.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 108 } 2015-04-01T16:21:56.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 109 } 2015-04-01T16:21:56.294+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.295+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|133, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.296+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.296+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 110 } 2015-04-01T16:21:56.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 111 } 2015-04-01T16:21:56.297+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 112 } 2015-04-01T16:21:56.298+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|136, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.299+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.300+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 113 } 2015-04-01T16:21:56.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 114 } 2015-04-01T16:21:56.301+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.301+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|138, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.301+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 115 } 2015-04-01T16:21:56.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 116 } 2015-04-01T16:21:56.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 117 } 2015-04-01T16:21:56.302+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|141, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.304+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.305+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 118 } 2015-04-01T16:21:56.306+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.307+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.308+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 119 } 2015-04-01T16:21:56.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 120 } 2015-04-01T16:21:56.309+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|144, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.311+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.311+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 121 } 2015-04-01T16:21:56.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 122 } 2015-04-01T16:21:56.312+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|146, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.314+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.314+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.315+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 123 } 2015-04-01T16:21:56.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 124 } 2015-04-01T16:21:56.316+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.316+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|148, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.317+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.317+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.317+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 125 } 2015-04-01T16:21:56.318+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 126 } 2015-04-01T16:21:56.318+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 127 } 2015-04-01T16:21:56.318+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.334+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.346+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 128 } 2015-04-01T16:21:56.346+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|152, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.348+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.348+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 129 } 2015-04-01T16:21:56.349+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.352+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.352+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 130 } 2015-04-01T16:21:56.352+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|154, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.355+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.355+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.374+0000 D REPL [rsBackgroundSync] bgsync buffer has 792 bytes 2015-04-01T16:21:56.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 131 } 2015-04-01T16:21:56.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 132 } 2015-04-01T16:21:56.380+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.380+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|156, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.382+0000 D REPL [rsSync] replication batch size is 18 2015-04-01T16:21:56.382+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 133 } 2015-04-01T16:21:56.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 134 } 2015-04-01T16:21:56.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 135 } 2015-04-01T16:21:56.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 136 } 2015-04-01T16:21:56.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 137 } 2015-04-01T16:21:56.383+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 138 } 2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 139 } 2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 140 } 2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 141 } 2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 142 } 
2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 143 } 2015-04-01T16:21:56.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 144 } 2015-04-01T16:21:56.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 145 } 2015-04-01T16:21:56.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 146 } 2015-04-01T16:21:56.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 147 } 2015-04-01T16:21:56.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 148 } 2015-04-01T16:21:56.385+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 149 } 2015-04-01T16:21:56.386+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 150 } 2015-04-01T16:21:56.386+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.386+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.387+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 151 } 2015-04-01T16:21:56.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 152 } 2015-04-01T16:21:56.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 153 } 2015-04-01T16:21:56.388+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|177, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, 
hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.389+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.390+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 154 } 2015-04-01T16:21:56.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 155 } 2015-04-01T16:21:56.390+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|179, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.392+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.393+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.393+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 156 } 2015-04-01T16:21:56.394+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|180, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.396+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.396+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 157 } 2015-04-01T16:21:56.396+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 158 } 2015-04-01T16:21:56.396+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|182, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.399+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.400+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 159 } 2015-04-01T16:21:56.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 160 } 2015-04-01T16:21:56.401+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 161 } 2015-04-01T16:21:56.401+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|185, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.402+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.402+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 162 } 2015-04-01T16:21:56.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|186, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.406+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.407+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.407+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 163 } 2015-04-01T16:21:56.407+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 164 } 2015-04-01T16:21:56.407+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 165 } 2015-04-01T16:21:56.408+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.411+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.411+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.411+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 166 } 2015-04-01T16:21:56.412+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 167 } 2015-04-01T16:21:56.412+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.414+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.414+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.415+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 168 } 2015-04-01T16:21:56.415+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 169 } 2015-04-01T16:21:56.415+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 170 } 2015-04-01T16:21:56.415+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|194, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.417+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.417+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.418+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.419+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 171 } 2015-04-01T16:21:56.419+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 172 } 2015-04-01T16:21:56.419+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 173 } 2015-04-01T16:21:56.420+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|197, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.421+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.421+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.421+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 174 } 2015-04-01T16:21:56.421+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 175 } 2015-04-01T16:21:56.422+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|199, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.423+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.423+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.424+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 176 } 2015-04-01T16:21:56.424+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|200, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.426+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.426+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.427+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 177 } 2015-04-01T16:21:56.428+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 178 } 2015-04-01T16:21:56.428+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 179 } 2015-04-01T16:21:56.428+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.429+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.430+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.431+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 180 } 2015-04-01T16:21:56.431+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 181 } 2015-04-01T16:21:56.432+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 182 } 2015-04-01T16:21:56.432+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|206, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.432+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.432+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.432+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 183 } 2015-04-01T16:21:56.434+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 184 } 2015-04-01T16:21:56.434+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|208, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.435+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.435+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.436+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.437+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 185 } 2015-04-01T16:21:56.437+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 186 } 2015-04-01T16:21:56.437+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.438+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.438+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 187 } 2015-04-01T16:21:56.439+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|211, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.442+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.442+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.442+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 188 } 2015-04-01T16:21:56.442+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 189 } 2015-04-01T16:21:56.442+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 190 } 2015-04-01T16:21:56.443+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.445+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.446+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.446+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 191 } 2015-04-01T16:21:56.446+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 192 } 2015-04-01T16:21:56.446+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|216, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.448+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.449+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.449+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 193 } 2015-04-01T16:21:56.449+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 194 } 2015-04-01T16:21:56.450+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.451+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.451+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.452+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 195 } 2015-04-01T16:21:56.453+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 196 } 2015-04-01T16:21:56.453+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|220, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.455+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.456+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.456+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 197 } 2015-04-01T16:21:56.456+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 198 } 2015-04-01T16:21:56.457+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.457+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.457+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.458+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.458+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 199 } 2015-04-01T16:21:56.459+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 200 } 2015-04-01T16:21:56.459+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 201 } 2015-04-01T16:21:56.459+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|225, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.460+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.461+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.461+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 202 } 2015-04-01T16:21:56.461+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|226, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.463+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.464+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.464+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 203 } 2015-04-01T16:21:56.465+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 204 } 2015-04-01T16:21:56.465+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|228, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.466+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.467+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.467+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 205 } 2015-04-01T16:21:56.467+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 206 } 2015-04-01T16:21:56.467+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|230, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.469+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.472+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.473+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 207 } 2015-04-01T16:21:56.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 208 } 2015-04-01T16:21:56.474+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|232, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.474+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.476+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.476+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 209 } 2015-04-01T16:21:56.477+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 210 } 2015-04-01T16:21:56.477+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 211 } 2015-04-01T16:21:56.478+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|235, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.478+0000 D REPL [rsBackgroundSync] bgsync buffer has 396 bytes 2015-04-01T16:21:56.478+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.479+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:56.479+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 212 } 2015-04-01T16:21:56.479+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 213 } 2015-04-01T16:21:56.479+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 214 } 2015-04-01T16:21:56.479+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 215 } 2015-04-01T16:21:56.479+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 216 } 2015-04-01T16:21:56.480+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|240, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.482+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.482+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.482+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 217 } 2015-04-01T16:21:56.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|241, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.484+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.484+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.485+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 218 } 2015-04-01T16:21:56.485+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 219 } 2015-04-01T16:21:56.485+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|243, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.487+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.488+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.488+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 220 } 2015-04-01T16:21:56.489+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|244, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.490+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.491+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.491+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 221 } 2015-04-01T16:21:56.491+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 222 } 2015-04-01T16:21:56.491+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|246, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.493+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.493+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.494+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 223 } 2015-04-01T16:21:56.494+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|247, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.496+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.496+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.497+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 224 } 2015-04-01T16:21:56.499+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 225 } 2015-04-01T16:21:56.500+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 226 } 2015-04-01T16:21:56.501+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|250, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.501+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.502+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.502+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.503+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 227 } 2015-04-01T16:21:56.504+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 228 } 2015-04-01T16:21:56.504+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 229 } 2015-04-01T16:21:56.504+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|253, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.505+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.506+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.506+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 230 } 2015-04-01T16:21:56.506+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 231 } 2015-04-01T16:21:56.506+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|255, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.507+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.509+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 232 } 2015-04-01T16:21:56.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 233 } 2015-04-01T16:21:56.509+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|257, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.509+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.509+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 234 } 2015-04-01T16:21:56.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 235 } 2015-04-01T16:21:56.509+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|259, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.511+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.511+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.511+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 236 } 2015-04-01T16:21:56.512+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|260, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.514+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.515+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.515+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 237 } 2015-04-01T16:21:56.515+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 238 } 2015-04-01T16:21:56.515+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 239 } 2015-04-01T16:21:56.515+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|263, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.518+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.518+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 240 } 2015-04-01T16:21:56.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 241 } 2015-04-01T16:21:56.519+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|265, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.520+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.521+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 242 } 2015-04-01T16:21:56.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 243 } 2015-04-01T16:21:56.521+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|267, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.525+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.525+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 244 } 2015-04-01T16:21:56.525+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|268, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.527+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.527+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.528+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 245 } 2015-04-01T16:21:56.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 246 } 2015-04-01T16:21:56.529+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|270, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.533+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.534+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 247 } 2015-04-01T16:21:56.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 248 } 2015-04-01T16:21:56.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 249 } 2015-04-01T16:21:56.536+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|273, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.536+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.537+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.537+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 250 } 2015-04-01T16:21:56.537+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:56.540+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:32768 fromFreeList: 1 eloc: 3:1648a000 2015-04-01T16:21:56.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 251 } 2015-04-01T16:21:56.541+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|275, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.541+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.542+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 252 } 2015-04-01T16:21:56.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 253 } 2015-04-01T16:21:56.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 254 } 2015-04-01T16:21:56.542+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|278, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.543+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.543+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 255 } 2015-04-01T16:21:56.543+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|279, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.546+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.546+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 256 } 2015-04-01T16:21:56.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 257 } 2015-04-01T16:21:56.547+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|281, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.550+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.550+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 258 } 2015-04-01T16:21:56.550+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 259 } 2015-04-01T16:21:56.550+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|283, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.552+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.553+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.553+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 260 } 2015-04-01T16:21:56.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 261 } 2015-04-01T16:21:56.554+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|285, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.555+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.556+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 262 } 2015-04-01T16:21:56.556+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|286, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.558+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.559+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 263 } 2015-04-01T16:21:56.559+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|287, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.561+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.562+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 264 } 2015-04-01T16:21:56.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 265 } 2015-04-01T16:21:56.562+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 266 } 2015-04-01T16:21:56.563+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|290, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.564+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.565+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 267 } 2015-04-01T16:21:56.565+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 268 } 2015-04-01T16:21:56.565+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|292, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.567+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.568+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 269 } 2015-04-01T16:21:56.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 270 } 2015-04-01T16:21:56.568+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|294, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.570+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.571+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 271 } 2015-04-01T16:21:56.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 272 } 2015-04-01T16:21:56.571+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|296, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.573+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.573+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 273 } 2015-04-01T16:21:56.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 274 } 2015-04-01T16:21:56.573+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|298, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.576+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.576+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.577+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 275 } 2015-04-01T16:21:56.577+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 276 } 2015-04-01T16:21:56.578+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|300, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.579+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.580+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 277 } 2015-04-01T16:21:56.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 278 } 2015-04-01T16:21:56.581+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|302, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.582+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.584+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 279 } 2015-04-01T16:21:56.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 280 } 2015-04-01T16:21:56.584+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 281 } 2015-04-01T16:21:56.585+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|305, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.586+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.586+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 282 } 2015-04-01T16:21:56.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 283 } 2015-04-01T16:21:56.587+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|307, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.589+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.589+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 284 } 2015-04-01T16:21:56.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 285 } 2015-04-01T16:21:56.591+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|309, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.591+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.592+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 286 } 2015-04-01T16:21:56.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 287 } 2015-04-01T16:21:56.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 288 } 2015-04-01T16:21:56.594+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|312, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.594+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.596+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.596+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 289 } 2015-04-01T16:21:56.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 290 } 2015-04-01T16:21:56.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 291 } 2015-04-01T16:21:56.597+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|315, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.599+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.600+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 292 } 2015-04-01T16:21:56.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 293 } 2015-04-01T16:21:56.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 294 } 2015-04-01T16:21:56.601+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|318, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.602+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.603+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 295 } 2015-04-01T16:21:56.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 296 } 2015-04-01T16:21:56.603+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|320, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.605+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.605+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.605+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 297 } 2015-04-01T16:21:56.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 298 } 2015-04-01T16:21:56.606+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 299 } 2015-04-01T16:21:56.606+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|323, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.608+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.608+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 300 } 2015-04-01T16:21:56.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 301 } 2015-04-01T16:21:56.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 302 } 2015-04-01T16:21:56.608+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|326, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.611+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.612+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 303 } 2015-04-01T16:21:56.612+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|327, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.614+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.614+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.615+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 304 } 2015-04-01T16:21:56.615+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|328, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.618+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.618+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.618+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 305 } 2015-04-01T16:21:56.619+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|329, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.619+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.619+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 306 }
2015-04-01T16:21:56.620+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|330, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.621+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.622+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 307 }
2015-04-01T16:21:56.622+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|331, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.624+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.625+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 308 }
2015-04-01T16:21:56.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 309 }
2015-04-01T16:21:56.625+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|333, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.627+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.628+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 310 }
2015-04-01T16:21:56.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 311 }
2015-04-01T16:21:56.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 312 }
2015-04-01T16:21:56.628+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|336, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.630+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.631+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.631+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 313 }
2015-04-01T16:21:56.631+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 314 }
2015-04-01T16:21:56.631+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|338, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.633+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.634+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.634+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 315 }
2015-04-01T16:21:56.634+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 316 }
2015-04-01T16:21:56.635+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|340, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.637+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.638+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:56.638+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 317 }
2015-04-01T16:21:56.638+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 318 }
2015-04-01T16:21:56.639+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 319 }
2015-04-01T16:21:56.639+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 320 }
2015-04-01T16:21:56.639+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|344, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.640+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:56.641+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.641+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 321 }
2015-04-01T16:21:56.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 322 }
2015-04-01T16:21:56.642+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 323 }
2015-04-01T16:21:56.642+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|347, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.644+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.645+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 324 }
2015-04-01T16:21:56.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 325 }
2015-04-01T16:21:56.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 326 }
2015-04-01T16:21:56.647+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|350, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.648+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.649+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 327 }
2015-04-01T16:21:56.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 328 }
2015-04-01T16:21:56.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 329 }
2015-04-01T16:21:56.649+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|353, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.650+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.651+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.652+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 330 }
2015-04-01T16:21:56.652+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|354, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.653+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.653+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.653+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 331 }
2015-04-01T16:21:56.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 332 }
2015-04-01T16:21:56.654+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|356, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.657+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.657+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 333 }
2015-04-01T16:21:56.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 334 }
2015-04-01T16:21:56.658+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 335 }
2015-04-01T16:21:56.658+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|359, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.659+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:56.659+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.660+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 336 }
2015-04-01T16:21:56.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 337 }
2015-04-01T16:21:56.662+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|361, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.662+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.663+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 338 }
2015-04-01T16:21:56.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 339 }
2015-04-01T16:21:56.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 340 }
2015-04-01T16:21:56.665+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|364, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.666+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.667+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 341 }
2015-04-01T16:21:56.667+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 342 }
2015-04-01T16:21:56.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 343 }
2015-04-01T16:21:56.668+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|367, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.670+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.670+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 344 }
2015-04-01T16:21:56.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 345 }
2015-04-01T16:21:56.670+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 346 }
2015-04-01T16:21:56.671+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|370, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.674+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.674+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 347 }
2015-04-01T16:21:56.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 348 }
2015-04-01T16:21:56.675+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|372, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.676+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:56.676+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.676+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 349 }
2015-04-01T16:21:56.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 350 }
2015-04-01T16:21:56.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 351 }
2015-04-01T16:21:56.678+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|375, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.680+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.680+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 352 }
2015-04-01T16:21:56.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 353 }
2015-04-01T16:21:56.681+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 354 }
2015-04-01T16:21:56.681+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|378, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.682+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.683+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 355 }
2015-04-01T16:21:56.683+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 356 }
2015-04-01T16:21:56.683+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|380, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.685+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.686+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 357 }
2015-04-01T16:21:56.686+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 358 }
2015-04-01T16:21:56.687+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|382, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.688+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.690+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 359 }
2015-04-01T16:21:56.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 360 }
2015-04-01T16:21:56.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 361 }
2015-04-01T16:21:56.690+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|385, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.691+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.692+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.692+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 362 }
2015-04-01T16:21:56.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 363 }
2015-04-01T16:21:56.693+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|387, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.694+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.694+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.694+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 364 }
2015-04-01T16:21:56.695+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 365 }
2015-04-01T16:21:56.695+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|389, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.697+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:56.697+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.698+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.698+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 366 }
2015-04-01T16:21:56.699+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 367 }
2015-04-01T16:21:56.699+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|391, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.700+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.700+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.700+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 368 }
2015-04-01T16:21:56.701+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 369 }
2015-04-01T16:21:56.701+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|393, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.703+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.703+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.703+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 370 }
2015-04-01T16:21:56.704+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 371 }
2015-04-01T16:21:56.704+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|395, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.706+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.707+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.707+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 372 }
2015-04-01T16:21:56.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 373 }
2015-04-01T16:21:56.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 374 }
2015-04-01T16:21:56.708+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|398, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.709+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.710+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.710+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 375 }
2015-04-01T16:21:56.710+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 376 }
2015-04-01T16:21:56.710+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|400, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.712+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.713+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.713+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 377 }
2015-04-01T16:21:56.714+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 378 }
2015-04-01T16:21:56.714+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 379 }
2015-04-01T16:21:56.714+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|403, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.716+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:56.716+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.717+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.717+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 380 }
2015-04-01T16:21:56.718+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 381 }
2015-04-01T16:21:56.718+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 382 }
2015-04-01T16:21:56.718+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|406, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.721+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.721+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.721+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 383 }
2015-04-01T16:21:56.721+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 384 }
2015-04-01T16:21:56.722+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|408, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.723+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.723+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 385 }
2015-04-01T16:21:56.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 386 }
2015-04-01T16:21:56.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 387 }
2015-04-01T16:21:56.724+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|411, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.726+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.726+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 388 }
2015-04-01T16:21:56.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 389 }
2015-04-01T16:21:56.727+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|413, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.729+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.729+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 390 }
2015-04-01T16:21:56.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 391 }
2015-04-01T16:21:56.730+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 392 }
2015-04-01T16:21:56.730+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|416, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.731+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.732+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:56.732+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 393 }
2015-04-01T16:21:56.732+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|417, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.734+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.734+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 394 }
2015-04-01T16:21:56.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 395 }
2015-04-01T16:21:56.735+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|419, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.738+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:56.738+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.739+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:56.740+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 396 }
2015-04-01T16:21:56.740+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 397 }
2015-04-01T16:21:56.740+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 398 }
2015-04-01T16:21:56.740+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 399 }
2015-04-01T16:21:56.740+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|423, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.741+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.742+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.742+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 400 }
2015-04-01T16:21:56.742+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 401 }
2015-04-01T16:21:56.743+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 402 }
2015-04-01T16:21:56.743+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|426, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.745+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.745+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:56.746+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 403 }
2015-04-01T16:21:56.746+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 404 }
2015-04-01T16:21:56.746+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|428, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.748+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:56.749+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:56.749+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 405 }
2015-04-01T16:21:56.749+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 406 }
2015-04-01T16:21:56.749+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 407 }
2015-04-01T16:21:56.750+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|431, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:56.751+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes
2015-04-01T16:21:56.751+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.752+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:56.752+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:56.752+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:56.753+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.753+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 408 } 2015-04-01T16:21:56.753+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 409 } 2015-04-01T16:21:56.753+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 410 } 2015-04-01T16:21:56.753+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 411 } 2015-04-01T16:21:56.754+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|435, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.755+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.755+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.755+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 412 } 2015-04-01T16:21:56.755+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 413 } 2015-04-01T16:21:56.756+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|437, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.759+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.759+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.759+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 414 } 2015-04-01T16:21:56.759+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 415 } 2015-04-01T16:21:56.760+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|439, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.762+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.763+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 416 } 2015-04-01T16:21:56.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 417 } 2015-04-01T16:21:56.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 418 } 2015-04-01T16:21:56.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 419 } 2015-04-01T16:21:56.766+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|443, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.767+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.768+0000 D REPL [rsSync] replication batch size is 5 2015-04-01T16:21:56.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 420 } 2015-04-01T16:21:56.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 421 } 2015-04-01T16:21:56.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 422 } 2015-04-01T16:21:56.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 423 } 2015-04-01T16:21:56.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 424 } 2015-04-01T16:21:56.770+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|448, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.771+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.772+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.772+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.773+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 425 } 2015-04-01T16:21:56.773+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 426 } 2015-04-01T16:21:56.774+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 427 } 2015-04-01T16:21:56.774+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 428 } 2015-04-01T16:21:56.775+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.775+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|452, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.776+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 429 } 2015-04-01T16:21:56.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 430 } 2015-04-01T16:21:56.777+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 431 } 2015-04-01T16:21:56.778+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|455, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.778+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.779+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 432 } 2015-04-01T16:21:56.779+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 433 } 2015-04-01T16:21:56.780+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|457, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.781+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.781+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 434 } 2015-04-01T16:21:56.781+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 435 } 2015-04-01T16:21:56.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 436 } 2015-04-01T16:21:56.782+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|460, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.783+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.784+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 437 } 2015-04-01T16:21:56.784+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|461, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.787+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.787+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 438 } 2015-04-01T16:21:56.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 439 } 2015-04-01T16:21:56.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 440 } 2015-04-01T16:21:56.788+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|464, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.790+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.790+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.791+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 441 } 2015-04-01T16:21:56.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 442 } 2015-04-01T16:21:56.792+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 443 } 2015-04-01T16:21:56.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 444 } 2015-04-01T16:21:56.793+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|468, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.793+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.795+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 445 } 2015-04-01T16:21:56.795+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 446 } 2015-04-01T16:21:56.795+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|470, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.797+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.797+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 447 } 2015-04-01T16:21:56.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 448 } 2015-04-01T16:21:56.798+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|472, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.800+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.801+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.801+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 449 } 2015-04-01T16:21:56.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 450 } 2015-04-01T16:21:56.802+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|474, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.803+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.803+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.804+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 451 } 2015-04-01T16:21:56.804+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 452 } 2015-04-01T16:21:56.804+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 453 } 2015-04-01T16:21:56.805+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|477, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.806+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.807+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.807+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 454 } 2015-04-01T16:21:56.808+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 455 } 2015-04-01T16:21:56.808+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|479, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.809+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.810+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 456 } 2015-04-01T16:21:56.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 457 } 2015-04-01T16:21:56.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 458 } 2015-04-01T16:21:56.811+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|482, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.812+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.812+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 459 } 2015-04-01T16:21:56.813+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|483, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.815+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.816+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 460 } 2015-04-01T16:21:56.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 461 } 2015-04-01T16:21:56.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 462 } 2015-04-01T16:21:56.817+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|486, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.818+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.819+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 463 } 2015-04-01T16:21:56.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 464 } 2015-04-01T16:21:56.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 465 } 2015-04-01T16:21:56.820+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|489, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.821+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.822+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.822+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 466 } 2015-04-01T16:21:56.823+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 467 } 2015-04-01T16:21:56.823+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|491, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.824+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.825+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.825+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 468 } 2015-04-01T16:21:56.825+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 469 } 2015-04-01T16:21:56.825+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|493, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.827+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.827+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.829+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.829+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 470 } 2015-04-01T16:21:56.830+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 471 } 2015-04-01T16:21:56.831+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|495, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.831+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.832+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 472 } 2015-04-01T16:21:56.833+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 473 } 2015-04-01T16:21:56.833+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|497, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.834+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.834+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 474 } 2015-04-01T16:21:56.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 475 } 2015-04-01T16:21:56.835+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|499, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.836+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.837+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 476 } 2015-04-01T16:21:56.837+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|500, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.839+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:56.839+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:56.840+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.840+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:21:58.840Z 2015-04-01T16:21:56.842+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 477 } 2015-04-01T16:21:56.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 478 } 2015-04-01T16:21:56.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 479 } 2015-04-01T16:21:56.843+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|503, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.843+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.844+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 480 } 2015-04-01T16:21:56.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 481 } 2015-04-01T16:21:56.845+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 482 } 2015-04-01T16:21:56.846+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.846+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|506, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.847+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 483 } 2015-04-01T16:21:56.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 484 } 2015-04-01T16:21:56.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 485 } 2015-04-01T16:21:56.848+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|509, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.849+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.849+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.850+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 486 } 2015-04-01T16:21:56.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 487 } 2015-04-01T16:21:56.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 488 } 2015-04-01T16:21:56.850+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|512, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.852+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.852+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 489 } 2015-04-01T16:21:56.853+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|513, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.855+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.855+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 490 } 2015-04-01T16:21:56.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 491 } 2015-04-01T16:21:56.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 492 } 2015-04-01T16:21:56.857+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|516, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.858+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.859+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 493 } 2015-04-01T16:21:56.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 494 } 2015-04-01T16:21:56.860+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 495 } 2015-04-01T16:21:56.860+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|519, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.861+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.862+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 496 } 2015-04-01T16:21:56.862+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 497 } 2015-04-01T16:21:56.862+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|521, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.864+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.865+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 498 } 2015-04-01T16:21:56.865+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 499 } 2015-04-01T16:21:56.866+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|523, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.867+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.867+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.867+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.868+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 500 } 2015-04-01T16:21:56.868+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.868+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|524, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.869+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 501 } 2015-04-01T16:21:56.869+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|525, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.870+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.871+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 502 } 2015-04-01T16:21:56.871+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 503 } 2015-04-01T16:21:56.871+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|527, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.874+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.874+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 504 } 2015-04-01T16:21:56.874+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 505 } 2015-04-01T16:21:56.874+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|529, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.877+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.877+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 506 } 2015-04-01T16:21:56.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 507 } 2015-04-01T16:21:56.878+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 508 } 2015-04-01T16:21:56.878+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|532, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.879+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.880+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 509 } 2015-04-01T16:21:56.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 510 } 2015-04-01T16:21:56.881+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|534, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.884+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.884+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 511 } 2015-04-01T16:21:56.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 512 } 2015-04-01T16:21:56.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 513 } 2015-04-01T16:21:56.885+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|537, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.887+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.889+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 514 } 2015-04-01T16:21:56.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 515 } 2015-04-01T16:21:56.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 516 } 2015-04-01T16:21:56.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 517 } 2015-04-01T16:21:56.891+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|541, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.891+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.892+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 518 } 2015-04-01T16:21:56.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 519 } 2015-04-01T16:21:56.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 520 } 2015-04-01T16:21:56.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 521 } 2015-04-01T16:21:56.894+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|545, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.895+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.896+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 522 } 2015-04-01T16:21:56.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 523 } 2015-04-01T16:21:56.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 524 } 2015-04-01T16:21:56.896+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|548, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.898+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.899+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 525 } 2015-04-01T16:21:56.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 526 } 2015-04-01T16:21:56.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 527 } 2015-04-01T16:21:56.901+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|551, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.903+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.904+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 528 } 2015-04-01T16:21:56.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 529 } 2015-04-01T16:21:56.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 530 } 2015-04-01T16:21:56.905+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|554, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.905+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.906+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.906+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 531 } 2015-04-01T16:21:56.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 532 } 2015-04-01T16:21:56.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 533 } 2015-04-01T16:21:56.907+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|557, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.910+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.911+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 534 } 2015-04-01T16:21:56.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 535 } 2015-04-01T16:21:56.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 536 } 2015-04-01T16:21:56.912+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|560, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.913+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.914+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 537 } 2015-04-01T16:21:56.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 538 } 2015-04-01T16:21:56.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 539 } 2015-04-01T16:21:56.915+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|563, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.916+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.917+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 540 } 2015-04-01T16:21:56.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 541 } 2015-04-01T16:21:56.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 542 } 2015-04-01T16:21:56.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 543 } 2015-04-01T16:21:56.918+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|567, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.920+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.921+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.922+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 544 } 2015-04-01T16:21:56.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 545 } 2015-04-01T16:21:56.923+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 546 } 2015-04-01T16:21:56.923+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.923+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|570, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.925+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 547 } 2015-04-01T16:21:56.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 548 } 2015-04-01T16:21:56.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 549 } 2015-04-01T16:21:56.926+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|573, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.926+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.927+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.927+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 550 } 2015-04-01T16:21:56.928+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 551 } 2015-04-01T16:21:56.928+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|575, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.930+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.931+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 552 } 2015-04-01T16:21:56.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 553 } 2015-04-01T16:21:56.931+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 554 } 2015-04-01T16:21:56.932+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|578, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.934+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.935+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 555 } 2015-04-01T16:21:56.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 556 } 2015-04-01T16:21:56.935+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 557 } 2015-04-01T16:21:56.935+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|581, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.937+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.937+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 558 } 2015-04-01T16:21:56.938+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 559 } 2015-04-01T16:21:56.938+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|583, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.941+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:56.941+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.941+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.941+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 560 } 2015-04-01T16:21:56.942+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|584, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.942+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.943+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 561 } 2015-04-01T16:21:56.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 562 } 2015-04-01T16:21:56.944+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 563 } 2015-04-01T16:21:56.944+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|587, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.945+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.947+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.947+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 564 } 2015-04-01T16:21:56.947+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 565 } 2015-04-01T16:21:56.948+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 566 } 2015-04-01T16:21:56.949+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|590, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.949+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.950+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.950+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 567 } 2015-04-01T16:21:56.950+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 568 } 2015-04-01T16:21:56.951+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|592, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.951+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.951+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 569 } 2015-04-01T16:21:56.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 570 } 2015-04-01T16:21:56.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 571 } 2015-04-01T16:21:56.952+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|595, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.954+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.954+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.954+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 572 } 2015-04-01T16:21:56.955+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|596, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.957+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes 2015-04-01T16:21:56.957+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.958+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.958+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 573 } 2015-04-01T16:21:56.958+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 574 } 2015-04-01T16:21:56.958+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 575 } 2015-04-01T16:21:56.959+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 576 } 2015-04-01T16:21:56.959+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|600, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.961+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.961+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 577 } 2015-04-01T16:21:56.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 578 } 2015-04-01T16:21:56.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 579 } 2015-04-01T16:21:56.963+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|603, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.964+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.966+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.966+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 580 } 2015-04-01T16:21:56.966+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 581 } 2015-04-01T16:21:56.967+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 582 } 2015-04-01T16:21:56.967+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|606, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.967+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.968+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.968+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 583 } 2015-04-01T16:21:56.969+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 584 } 2015-04-01T16:21:56.969+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|608, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.970+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.971+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.971+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 585 } 2015-04-01T16:21:56.971+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 586 } 2015-04-01T16:21:56.971+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|610, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.973+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.974+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.975+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 587 } 2015-04-01T16:21:56.975+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 588 } 2015-04-01T16:21:56.975+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|612, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.977+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:56.977+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.978+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.978+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 589 } 2015-04-01T16:21:56.978+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 590 } 2015-04-01T16:21:56.979+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 591 } 2015-04-01T16:21:56.979+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 592 } 2015-04-01T16:21:56.980+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|616, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.981+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 
2015-04-01T16:21:56.981+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:21:56.982+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.982+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:21:56.983+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:56.983+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 593 } 2015-04-01T16:21:56.983+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 594 } 2015-04-01T16:21:56.983+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 595 } 2015-04-01T16:21:56.983+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:21:58.983Z 2015-04-01T16:21:56.984+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 596 } 2015-04-01T16:21:56.985+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.985+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|620, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.986+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:56.986+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:56.987+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:56.987+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.987+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 597 } 2015-04-01T16:21:56.987+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 598 } 2015-04-01T16:21:56.987+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 599 } 2015-04-01T16:21:56.988+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|623, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.988+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.990+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.990+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 600 } 2015-04-01T16:21:56.991+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 601 } 2015-04-01T16:21:56.991+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|625, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.991+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.992+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:56.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 602 } 2015-04-01T16:21:56.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 603 } 2015-04-01T16:21:56.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 604 } 2015-04-01T16:21:56.994+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|628, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.995+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.995+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:56.995+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 605 } 2015-04-01T16:21:56.995+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|629, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:56.997+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:56.998+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:56.998+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:56.998+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 606 } 2015-04-01T16:21:56.999+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 607 } 2015-04-01T16:21:56.999+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905316000|631, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.000+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.002+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 608 } 2015-04-01T16:21:57.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 609 } 2015-04-01T16:21:57.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 610 } 2015-04-01T16:21:57.003+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.004+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.004+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 611 } 2015-04-01T16:21:57.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 612 } 2015-04-01T16:21:57.005+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.008+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.009+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.009+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 613 } 2015-04-01T16:21:57.009+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 614 } 2015-04-01T16:21:57.009+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 615 } 2015-04-01T16:21:57.010+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.011+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.011+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.011+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 616 } 2015-04-01T16:21:57.011+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 617 } 2015-04-01T16:21:57.012+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 618 } 2015-04-01T16:21:57.012+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.013+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.013+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 619 } 2015-04-01T16:21:57.014+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|10, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.016+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.017+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.017+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.017+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 620 } 2015-04-01T16:21:57.017+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 621 } 2015-04-01T16:21:57.018+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.019+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.020+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.020+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 622 } 2015-04-01T16:21:57.020+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 623 } 2015-04-01T16:21:57.021+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|14, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.023+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.023+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.023+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 624 } 2015-04-01T16:21:57.023+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 625 } 2015-04-01T16:21:57.024+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 626 } 2015-04-01T16:21:57.024+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 627 } 2015-04-01T16:21:57.024+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.027+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.028+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.029+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 628 } 2015-04-01T16:21:57.030+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.030+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.031+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.031+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 629 } 2015-04-01T16:21:57.031+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 630 } 2015-04-01T16:21:57.032+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.033+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.033+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.033+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 631 } 2015-04-01T16:21:57.033+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 632 } 2015-04-01T16:21:57.034+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|23, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.037+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.037+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.038+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 633 } 2015-04-01T16:21:57.038+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 634 } 2015-04-01T16:21:57.038+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 635 } 2015-04-01T16:21:57.039+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.039+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.040+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.040+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.040+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 636 } 2015-04-01T16:21:57.040+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 637 } 2015-04-01T16:21:57.041+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 638 } 2015-04-01T16:21:57.041+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|29, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.044+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.044+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.044+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 639 } 2015-04-01T16:21:57.045+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 640 } 2015-04-01T16:21:57.045+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.047+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.048+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.049+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 641 } 2015-04-01T16:21:57.049+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 642 } 2015-04-01T16:21:57.049+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 643 } 2015-04-01T16:21:57.049+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|34, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.050+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.051+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 644 } 2015-04-01T16:21:57.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 645 } 2015-04-01T16:21:57.052+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 646 } 2015-04-01T16:21:57.053+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|37, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.054+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.054+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 647 } 2015-04-01T16:21:57.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 648 } 2015-04-01T16:21:57.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 649 } 2015-04-01T16:21:57.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.057+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.057+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 650 } 2015-04-01T16:21:57.058+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.060+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.060+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.060+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 651 } 2015-04-01T16:21:57.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 652 } 2015-04-01T16:21:57.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 653 } 2015-04-01T16:21:57.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 654 } 2015-04-01T16:21:57.061+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.064+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.064+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 655 } 2015-04-01T16:21:57.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 656 } 2015-04-01T16:21:57.065+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.066+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.067+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 657 }
2015-04-01T16:21:57.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 658 }
2015-04-01T16:21:57.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 659 }
2015-04-01T16:21:57.068+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|50, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.070+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.071+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 660 }
2015-04-01T16:21:57.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 661 }
2015-04-01T16:21:57.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 662 }
2015-04-01T16:21:57.072+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.074+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes
2015-04-01T16:21:57.074+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.075+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 663 }
2015-04-01T16:21:57.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 664 }
2015-04-01T16:21:57.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 665 }
2015-04-01T16:21:57.076+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.076+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.077+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 666 }
2015-04-01T16:21:57.078+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.078+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.079+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 667 }
2015-04-01T16:21:57.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 668 }
2015-04-01T16:21:57.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 669 }
2015-04-01T16:21:57.080+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.080+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.081+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 670 }
2015-04-01T16:21:57.081+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.083+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.083+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.084+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 671 }
2015-04-01T16:21:57.084+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.087+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.087+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.087+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 672 }
2015-04-01T16:21:57.087+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 673 }
2015-04-01T16:21:57.088+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.090+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.091+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 674 }
2015-04-01T16:21:57.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 675 }
2015-04-01T16:21:57.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 676 }
2015-04-01T16:21:57.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.093+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.094+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 677 }
2015-04-01T16:21:57.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 678 }
2015-04-01T16:21:57.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 679 }
2015-04-01T16:21:57.095+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.096+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.096+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.098+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 680 }
2015-04-01T16:21:57.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 681 }
2015-04-01T16:21:57.100+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.101+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.101+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 682 }
2015-04-01T16:21:57.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 683 }
2015-04-01T16:21:57.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 684 }
2015-04-01T16:21:57.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.103+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.103+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 685 }
2015-04-01T16:21:57.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 686 }
2015-04-01T16:21:57.104+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|77, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.106+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.107+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 687 }
2015-04-01T16:21:57.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 688 }
2015-04-01T16:21:57.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 689 }
2015-04-01T16:21:57.108+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|80, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.109+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.111+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 690 }
2015-04-01T16:21:57.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 691 }
2015-04-01T16:21:57.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 692 }
2015-04-01T16:21:57.111+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|83, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.112+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.112+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.112+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 693 }
2015-04-01T16:21:57.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 694 }
2015-04-01T16:21:57.113+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:57.113+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.115+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.115+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.116+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 695 }
2015-04-01T16:21:57.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 696 }
2015-04-01T16:21:57.117+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.119+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.120+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 697 }
2015-04-01T16:21:57.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 698 }
2015-04-01T16:21:57.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 699 }
2015-04-01T16:21:57.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 700 }
2015-04-01T16:21:57.122+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|91, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.124+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 701 }
2015-04-01T16:21:57.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 702 }
2015-04-01T16:21:57.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 703 }
2015-04-01T16:21:57.125+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|94, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.126+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.127+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 704 }
2015-04-01T16:21:57.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 705 }
2015-04-01T16:21:57.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 706 }
2015-04-01T16:21:57.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.130+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.131+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 707 }
2015-04-01T16:21:57.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 708 }
2015-04-01T16:21:57.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 709 }
2015-04-01T16:21:57.132+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.132+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.133+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.134+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 710 }
2015-04-01T16:21:57.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 711 }
2015-04-01T16:21:57.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 712 }
2015-04-01T16:21:57.136+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|103, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.136+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.137+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 713 }
2015-04-01T16:21:57.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 714 }
2015-04-01T16:21:57.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 715 }
2015-04-01T16:21:57.138+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.140+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.141+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 716 }
2015-04-01T16:21:57.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 717 }
2015-04-01T16:21:57.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 718 }
2015-04-01T16:21:57.143+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.143+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.144+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 719 }
2015-04-01T16:21:57.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 720 }
2015-04-01T16:21:57.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 721 }
2015-04-01T16:21:57.144+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|112, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.146+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.147+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 722 }
2015-04-01T16:21:57.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 723 }
2015-04-01T16:21:57.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.149+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:57.150+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.150+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 724 }
2015-04-01T16:21:57.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 725 }
2015-04-01T16:21:57.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 726 }
2015-04-01T16:21:57.151+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|117, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.153+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.154+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 727 }
2015-04-01T16:21:57.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 728 }
2015-04-01T16:21:57.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 729 }
2015-04-01T16:21:57.155+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|120, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.157+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.158+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 730 }
2015-04-01T16:21:57.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 731 }
2015-04-01T16:21:57.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 732 }
2015-04-01T16:21:57.159+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.161+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.162+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 733 }
2015-04-01T16:21:57.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 734 }
2015-04-01T16:21:57.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 735 }
2015-04-01T16:21:57.163+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.163+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.164+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 736 }
2015-04-01T16:21:57.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 737 }
2015-04-01T16:21:57.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 738 }
2015-04-01T16:21:57.165+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|129, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.166+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 739 }
2015-04-01T16:21:57.167+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|130, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.169+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.169+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.169+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 740 }
2015-04-01T16:21:57.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 741 }
2015-04-01T16:21:57.170+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.172+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.172+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 742 }
2015-04-01T16:21:57.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 743 }
2015-04-01T16:21:57.174+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|134, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.175+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.176+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 744 }
2015-04-01T16:21:57.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 745 }
2015-04-01T16:21:57.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 746 }
2015-04-01T16:21:57.177+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|137, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.179+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.179+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 747 }
2015-04-01T16:21:57.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 748 }
2015-04-01T16:21:57.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 749 }
2015-04-01T16:21:57.180+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.181+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.181+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 750 }
2015-04-01T16:21:57.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 751 }
2015-04-01T16:21:57.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|142, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.184+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.184+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 752 }
2015-04-01T16:21:57.185+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.187+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes
2015-04-01T16:21:57.188+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.189+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 753 }
2015-04-01T16:21:57.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 754 }
2015-04-01T16:21:57.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 755 }
2015-04-01T16:21:57.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 756 }
2015-04-01T16:21:57.190+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|147, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.191+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.192+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 757 }
2015-04-01T16:21:57.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 758 }
2015-04-01T16:21:57.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 759 }
2015-04-01T16:21:57.193+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|150, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.194+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 760 }
2015-04-01T16:21:57.194+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.196+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.197+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 761 }
2015-04-01T16:21:57.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 762 }
2015-04-01T16:21:57.197+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|153, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.200+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.201+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 763 }
2015-04-01T16:21:57.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 764 }
2015-04-01T16:21:57.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 765 }
2015-04-01T16:21:57.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 766 }
2015-04-01T16:21:57.203+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|157, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.203+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.204+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 767 } 2015-04-01T16:21:57.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 768 } 2015-04-01T16:21:57.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 769 } 2015-04-01T16:21:57.205+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|160, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.207+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.207+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.208+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 770 } 2015-04-01T16:21:57.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 771 } 2015-04-01T16:21:57.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 772 } 2015-04-01T16:21:57.209+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|163, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.211+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.212+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 773 } 2015-04-01T16:21:57.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 774 } 2015-04-01T16:21:57.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 775 } 2015-04-01T16:21:57.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 776 } 2015-04-01T16:21:57.213+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|167, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.214+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.214+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.215+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 777 } 2015-04-01T16:21:57.215+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|168, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.218+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 778 } 2015-04-01T16:21:57.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 779 } 2015-04-01T16:21:57.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 780 } 2015-04-01T16:21:57.220+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|171, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.220+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.221+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.222+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 781 } 2015-04-01T16:21:57.222+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 782 } 2015-04-01T16:21:57.222+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 783 } 2015-04-01T16:21:57.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.225+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 784 } 2015-04-01T16:21:57.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 785 } 2015-04-01T16:21:57.226+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|176, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.227+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.228+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.228+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 786 } 2015-04-01T16:21:57.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 787 } 2015-04-01T16:21:57.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 788 } 2015-04-01T16:21:57.229+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|179, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.231+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.232+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.232+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 789 } 2015-04-01T16:21:57.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 790 } 2015-04-01T16:21:57.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 791 } 2015-04-01T16:21:57.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 792 } 2015-04-01T16:21:57.234+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|183, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.235+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.235+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.236+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 793 } 2015-04-01T16:21:57.236+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 794 } 2015-04-01T16:21:57.236+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 795 } 2015-04-01T16:21:57.236+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|186, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.238+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.239+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.239+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 796 } 2015-04-01T16:21:57.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 797 } 2015-04-01T16:21:57.240+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|188, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.241+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes 2015-04-01T16:21:57.241+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.242+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.242+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 798 } 2015-04-01T16:21:57.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 799 } 2015-04-01T16:21:57.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 800 } 2015-04-01T16:21:57.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 801 } 2015-04-01T16:21:57.244+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|192, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.245+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.246+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.246+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 802 } 2015-04-01T16:21:57.246+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 803 } 2015-04-01T16:21:57.246+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 804 } 2015-04-01T16:21:57.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|195, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.248+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.249+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.249+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 805 } 2015-04-01T16:21:57.250+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 806 } 2015-04-01T16:21:57.250+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 807 } 2015-04-01T16:21:57.250+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|198, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.251+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.252+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.252+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 808 } 2015-04-01T16:21:57.253+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 809 } 2015-04-01T16:21:57.253+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|200, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.255+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.256+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 810 } 2015-04-01T16:21:57.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 811 } 2015-04-01T16:21:57.257+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 812 } 2015-04-01T16:21:57.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.259+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.259+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.260+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 813 } 2015-04-01T16:21:57.260+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 814 } 2015-04-01T16:21:57.260+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 815 } 2015-04-01T16:21:57.261+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|206, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.262+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.262+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.263+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.263+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 816 } 2015-04-01T16:21:57.263+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 817 } 2015-04-01T16:21:57.263+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 818 } 2015-04-01T16:21:57.263+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 819 } 2015-04-01T16:21:57.264+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.265+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.266+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.266+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 820 } 2015-04-01T16:21:57.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 821 } 2015-04-01T16:21:57.267+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|212, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.270+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.270+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.270+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 822 } 2015-04-01T16:21:57.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 823 } 2015-04-01T16:21:57.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 824 } 2015-04-01T16:21:57.272+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|215, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.274+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 825 } 2015-04-01T16:21:57.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 826 } 2015-04-01T16:21:57.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 827 } 2015-04-01T16:21:57.275+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.275+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|218, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.276+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 828 } 2015-04-01T16:21:57.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 829 } 2015-04-01T16:21:57.277+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.277+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|220, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.278+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.279+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 830 } 2015-04-01T16:21:57.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 831 } 2015-04-01T16:21:57.279+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.281+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.283+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 832 } 2015-04-01T16:21:57.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 833 } 2015-04-01T16:21:57.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 834 } 2015-04-01T16:21:57.283+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|225, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.284+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.285+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 835 } 2015-04-01T16:21:57.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 836 } 2015-04-01T16:21:57.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 837 } 2015-04-01T16:21:57.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|228, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.287+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.287+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.288+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 838 } 2015-04-01T16:21:57.288+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|229, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.290+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.291+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 839 } 2015-04-01T16:21:57.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 840 } 2015-04-01T16:21:57.292+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 841 } 2015-04-01T16:21:57.292+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|232, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.293+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.295+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 842 } 2015-04-01T16:21:57.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 843 } 2015-04-01T16:21:57.295+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 844 } 2015-04-01T16:21:57.296+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|235, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.297+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.297+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.298+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 845 } 2015-04-01T16:21:57.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 846 } 2015-04-01T16:21:57.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 847 } 2015-04-01T16:21:57.300+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|238, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.301+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.302+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 848 } 2015-04-01T16:21:57.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 849 } 2015-04-01T16:21:57.302+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 850 } 2015-04-01T16:21:57.303+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|241, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.304+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.305+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 851 } 2015-04-01T16:21:57.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 852 } 2015-04-01T16:21:57.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 853 } 2015-04-01T16:21:57.306+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|244, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.308+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.308+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 854 } 2015-04-01T16:21:57.309+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 855 } 2015-04-01T16:21:57.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 856 } 2015-04-01T16:21:57.310+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|247, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.312+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 857 } 2015-04-01T16:21:57.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 858 } 2015-04-01T16:21:57.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 859 } 2015-04-01T16:21:57.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|250, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.314+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.315+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.316+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 860 } 2015-04-01T16:21:57.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 861 } 2015-04-01T16:21:57.317+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 862 } 2015-04-01T16:21:57.317+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|253, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.318+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.319+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 863 } 2015-04-01T16:21:57.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 864 } 2015-04-01T16:21:57.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 865 } 2015-04-01T16:21:57.320+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|256, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.320+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.322+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.322+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 866 } 2015-04-01T16:21:57.322+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 867 } 2015-04-01T16:21:57.322+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|258, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.323+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.324+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.324+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 868 } 2015-04-01T16:21:57.324+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|259, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.327+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.327+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.327+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 869 } 2015-04-01T16:21:57.327+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 870 } 2015-04-01T16:21:57.328+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|261, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.329+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.331+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.331+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 871 } 2015-04-01T16:21:57.331+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 872 } 2015-04-01T16:21:57.331+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 873 } 2015-04-01T16:21:57.332+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|264, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.333+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.333+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.333+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.333+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 874 } 2015-04-01T16:21:57.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 875 } 2015-04-01T16:21:57.334+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 876 } 2015-04-01T16:21:57.335+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|267, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.337+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.337+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 877 } 2015-04-01T16:21:57.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 878 } 2015-04-01T16:21:57.338+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 879 } 2015-04-01T16:21:57.338+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|270, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.340+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.341+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 880 } 2015-04-01T16:21:57.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 881 } 2015-04-01T16:21:57.342+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 882 } 2015-04-01T16:21:57.343+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|273, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.344+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.345+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 883 } 2015-04-01T16:21:57.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 884 } 2015-04-01T16:21:57.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 885 } 2015-04-01T16:21:57.346+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|276, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.347+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.348+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.348+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 886 } 2015-04-01T16:21:57.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 887 } 2015-04-01T16:21:57.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 888 } 2015-04-01T16:21:57.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 889 } 2015-04-01T16:21:57.349+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|280, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.350+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.351+0000 
D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.352+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 890 } 2015-04-01T16:21:57.352+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 891 } 2015-04-01T16:21:57.353+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|282, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.355+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.356+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 892 } 2015-04-01T16:21:57.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 893 } 2015-04-01T16:21:57.356+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 894 } 2015-04-01T16:21:57.357+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|285, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.358+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.359+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 895 } 2015-04-01T16:21:57.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 896 } 2015-04-01T16:21:57.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 897 } 2015-04-01T16:21:57.360+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|288, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.362+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.362+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 898 } 2015-04-01T16:21:57.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 899 } 2015-04-01T16:21:57.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 900 } 2015-04-01T16:21:57.363+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|291, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.366+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.367+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 901 } 2015-04-01T16:21:57.367+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 902 } 2015-04-01T16:21:57.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 903 } 2015-04-01T16:21:57.368+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|294, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.369+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.370+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.371+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 904 } 2015-04-01T16:21:57.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 905 } 2015-04-01T16:21:57.371+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 906 } 2015-04-01T16:21:57.372+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 907 } 2015-04-01T16:21:57.372+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|298, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.373+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.374+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 908 } 2015-04-01T16:21:57.374+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 909 } 2015-04-01T16:21:57.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 910 } 2015-04-01T16:21:57.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 911 } 2015-04-01T16:21:57.375+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|302, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.376+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.377+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.377+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 912 } 2015-04-01T16:21:57.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 913 } 2015-04-01T16:21:57.378+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|304, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.378+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.379+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.379+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 914 } 2015-04-01T16:21:57.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 915 } 2015-04-01T16:21:57.380+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|306, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.382+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.383+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 916 } 2015-04-01T16:21:57.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 917 } 2015-04-01T16:21:57.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 918 } 2015-04-01T16:21:57.385+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|309, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.386+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.386+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.387+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 919 } 2015-04-01T16:21:57.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 920 } 2015-04-01T16:21:57.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 921 } 2015-04-01T16:21:57.388+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|312, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.389+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.389+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.389+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 922 } 2015-04-01T16:21:57.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 923 } 2015-04-01T16:21:57.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 924 } 2015-04-01T16:21:57.390+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|315, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.392+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.393+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 925 } 2015-04-01T16:21:57.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 926 } 2015-04-01T16:21:57.393+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|317, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.395+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.397+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 927 } 2015-04-01T16:21:57.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 928 } 2015-04-01T16:21:57.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 929 } 2015-04-01T16:21:57.398+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|320, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.398+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.399+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 930 } 2015-04-01T16:21:57.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 931 } 2015-04-01T16:21:57.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 932 } 2015-04-01T16:21:57.401+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|323, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.401+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.402+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.402+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 933 } 2015-04-01T16:21:57.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 934 } 2015-04-01T16:21:57.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|325, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.404+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.404+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.405+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.406+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 935 }
2015-04-01T16:21:57.406+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 936 }
2015-04-01T16:21:57.406+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|327, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.408+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.409+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.409+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 937 }
2015-04-01T16:21:57.410+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 938 }
2015-04-01T16:21:57.410+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 939 }
2015-04-01T16:21:57.410+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 940 }
2015-04-01T16:21:57.411+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|331, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.412+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.413+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.413+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 941 }
2015-04-01T16:21:57.413+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 942 }
2015-04-01T16:21:57.414+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 943 }
2015-04-01T16:21:57.414+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|334, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.415+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.416+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.416+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 944 }
2015-04-01T16:21:57.417+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 945 }
2015-04-01T16:21:57.417+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 946 }
2015-04-01T16:21:57.418+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|337, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.418+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.419+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.419+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 947 }
2015-04-01T16:21:57.420+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 948 }
2015-04-01T16:21:57.420+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|339, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.420+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.420+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.421+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 949 }
2015-04-01T16:21:57.421+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 950 }
2015-04-01T16:21:57.421+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|341, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.423+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.423+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.424+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.424+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 951 }
2015-04-01T16:21:57.424+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 952 }
2015-04-01T16:21:57.424+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|343, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.427+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.427+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.428+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 953 }
2015-04-01T16:21:57.430+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|344, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.430+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.430+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.430+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 954 }
2015-04-01T16:21:57.430+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 955 }
2015-04-01T16:21:57.431+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 956 }
2015-04-01T16:21:57.431+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|347, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.432+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.433+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.433+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 957 }
2015-04-01T16:21:57.433+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|348, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.436+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.437+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.437+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 958 }
2015-04-01T16:21:57.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 959 }
2015-04-01T16:21:57.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 960 }
2015-04-01T16:21:57.438+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|351, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.438+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.440+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.440+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 961 }
2015-04-01T16:21:57.440+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 962 }
2015-04-01T16:21:57.440+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 963 }
2015-04-01T16:21:57.441+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|354, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.442+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:57.443+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.444+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 964 }
2015-04-01T16:21:57.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 965 }
2015-04-01T16:21:57.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 966 }
2015-04-01T16:21:57.445+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|357, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.446+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.447+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.447+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 967 }
2015-04-01T16:21:57.447+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 968 }
2015-04-01T16:21:57.447+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 969 }
2015-04-01T16:21:57.449+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|360, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.450+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.451+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.451+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 970 }
2015-04-01T16:21:57.451+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 971 }
2015-04-01T16:21:57.452+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 972 }
2015-04-01T16:21:57.452+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 973 }
2015-04-01T16:21:57.452+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|364, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.454+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.455+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.455+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 974 }
2015-04-01T16:21:57.455+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 975 }
2015-04-01T16:21:57.456+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 976 }
2015-04-01T16:21:57.456+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 977 }
2015-04-01T16:21:57.457+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|368, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.457+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.458+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.458+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 978 }
2015-04-01T16:21:57.458+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 979 }
2015-04-01T16:21:57.459+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 980 }
2015-04-01T16:21:57.459+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|371, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.460+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.460+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.460+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.461+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 981 }
2015-04-01T16:21:57.461+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 982 }
2015-04-01T16:21:57.462+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|373, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.463+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.464+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.464+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 983 }
2015-04-01T16:21:57.464+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 984 }
2015-04-01T16:21:57.464+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 985 }
2015-04-01T16:21:57.464+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|376, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.469+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.470+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.470+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 986 }
2015-04-01T16:21:57.470+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 987 }
2015-04-01T16:21:57.470+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|378, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.472+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.473+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 988 }
2015-04-01T16:21:57.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 989 }
2015-04-01T16:21:57.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 990 }
2015-04-01T16:21:57.475+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|381, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.476+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.477+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.477+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 991 }
2015-04-01T16:21:57.477+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 992 }
2015-04-01T16:21:57.477+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 993 }
2015-04-01T16:21:57.478+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|384, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.479+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:57.479+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.480+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.480+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 994 }
2015-04-01T16:21:57.481+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 995 }
2015-04-01T16:21:57.481+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 996 }
2015-04-01T16:21:57.481+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 997 }
2015-04-01T16:21:57.482+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|388, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.483+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.484+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.484+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 998 }
2015-04-01T16:21:57.485+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 999 }
2015-04-01T16:21:57.485+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh.
2015-04-01T16:21:57.485+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1000 }
2015-04-01T16:21:57.486+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|391, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.487+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.488+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1001 }
2015-04-01T16:21:57.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1002 }
2015-04-01T16:21:57.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1003 }
2015-04-01T16:21:57.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1004 }
2015-04-01T16:21:57.490+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|395, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.491+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.491+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.492+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1005 }
2015-04-01T16:21:57.492+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1006 }
2015-04-01T16:21:57.492+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1007 }
2015-04-01T16:21:57.493+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|398, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.494+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.495+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.495+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1008 }
2015-04-01T16:21:57.495+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1009 }
2015-04-01T16:21:57.495+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1010 }
2015-04-01T16:21:57.495+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|401, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.496+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.496+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.497+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.497+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1011 }
2015-04-01T16:21:57.497+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|402, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.499+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.500+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.500+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1012 }
2015-04-01T16:21:57.501+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1013 }
2015-04-01T16:21:57.501+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1014 }
2015-04-01T16:21:57.501+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|405, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.503+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.505+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.505+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1015 }
2015-04-01T16:21:57.505+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1016 }
2015-04-01T16:21:57.505+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1017 }
2015-04-01T16:21:57.505+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1018 }
2015-04-01T16:21:57.506+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|409, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.507+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.508+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.508+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1019 }
2015-04-01T16:21:57.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1020 }
2015-04-01T16:21:57.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1021 }
2015-04-01T16:21:57.510+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|412, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.511+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.512+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.512+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1022 }
2015-04-01T16:21:57.512+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1023 }
2015-04-01T16:21:57.512+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1024 }
2015-04-01T16:21:57.513+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|415, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.514+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.514+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.515+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.515+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1025 }
2015-04-01T16:21:57.516+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1026 }
2015-04-01T16:21:57.516+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1027 }
2015-04-01T16:21:57.516+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|418, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.518+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.519+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.519+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1028 }
2015-04-01T16:21:57.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1029 }
2015-04-01T16:21:57.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1030 }
2015-04-01T16:21:57.520+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1031 }
2015-04-01T16:21:57.521+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|422, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.522+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.522+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1032 }
2015-04-01T16:21:57.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1033 }
2015-04-01T16:21:57.523+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1034 }
2015-04-01T16:21:57.524+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|425, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.525+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.527+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.527+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1035 }
2015-04-01T16:21:57.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1036 }
2015-04-01T16:21:57.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1037 }
2015-04-01T16:21:57.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1038 }
2015-04-01T16:21:57.528+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:57.528+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|429, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.529+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.531+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1039 }
2015-04-01T16:21:57.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1040 }
2015-04-01T16:21:57.531+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1041 }
2015-04-01T16:21:57.532+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|432, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.533+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.534+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.534+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1042 }
2015-04-01T16:21:57.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1043 }
2015-04-01T16:21:57.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1044 }
2015-04-01T16:21:57.535+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.535+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|435, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.536+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1045 }
2015-04-01T16:21:57.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1046 }
2015-04-01T16:21:57.536+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1047 }
2015-04-01T16:21:57.536+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|438, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.537+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.538+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1048 }
2015-04-01T16:21:57.538+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|439, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.540+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.541+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1049 }
2015-04-01T16:21:57.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1050 }
2015-04-01T16:21:57.541+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1051 }
2015-04-01T16:21:57.542+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|442, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.543+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.543+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.543+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1052 } 2015-04-01T16:21:57.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1053 } 2015-04-01T16:21:57.544+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|444, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.546+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.546+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.546+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1054 } 2015-04-01T16:21:57.547+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1055 } 2015-04-01T16:21:57.547+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|446, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.549+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.549+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.550+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1056 } 2015-04-01T16:21:57.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1057 } 2015-04-01T16:21:57.551+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1058 } 2015-04-01T16:21:57.551+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|449, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.552+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.552+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1059 } 2015-04-01T16:21:57.553+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1060 } 2015-04-01T16:21:57.553+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|451, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.555+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.556+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1061 } 2015-04-01T16:21:57.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1062 } 2015-04-01T16:21:57.557+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1063 } 2015-04-01T16:21:57.557+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|454, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.559+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.560+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1064 } 2015-04-01T16:21:57.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1065 } 2015-04-01T16:21:57.560+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1066 } 2015-04-01T16:21:57.560+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|457, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.562+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.562+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1067 } 2015-04-01T16:21:57.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1068 } 2015-04-01T16:21:57.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1069 } 2015-04-01T16:21:57.563+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1070 } 2015-04-01T16:21:57.563+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|461, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.566+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.566+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.566+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1071 } 2015-04-01T16:21:57.566+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1072 } 2015-04-01T16:21:57.567+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|463, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.569+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.570+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1073 } 2015-04-01T16:21:57.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1074 } 2015-04-01T16:21:57.570+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1075 } 2015-04-01T16:21:57.570+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|466, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.572+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.572+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.572+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1076 } 2015-04-01T16:21:57.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1077 } 2015-04-01T16:21:57.573+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|468, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.576+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.578+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1078 } 2015-04-01T16:21:57.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1079 } 2015-04-01T16:21:57.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1080 } 2015-04-01T16:21:57.578+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1081 } 2015-04-01T16:21:57.579+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|472, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.579+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.580+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.581+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1082 } 2015-04-01T16:21:57.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1083 } 2015-04-01T16:21:57.581+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|474, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.582+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.583+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1084 } 2015-04-01T16:21:57.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1085 } 2015-04-01T16:21:57.583+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1086 } 2015-04-01T16:21:57.584+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|477, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.585+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.586+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.586+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1087 } 2015-04-01T16:21:57.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1088 } 2015-04-01T16:21:57.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1089 } 2015-04-01T16:21:57.587+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1090 } 2015-04-01T16:21:57.587+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|481, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.588+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.590+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1091 } 2015-04-01T16:21:57.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1092 } 2015-04-01T16:21:57.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1093 } 2015-04-01T16:21:57.591+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|484, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.591+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.592+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1094 } 2015-04-01T16:21:57.592+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1095 } 2015-04-01T16:21:57.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1096 } 2015-04-01T16:21:57.593+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|487, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.597+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.597+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1097 } 2015-04-01T16:21:57.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1098 } 2015-04-01T16:21:57.598+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1099 } 2015-04-01T16:21:57.598+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|490, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.600+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.600+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.602+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1100 } 2015-04-01T16:21:57.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1101 } 2015-04-01T16:21:57.602+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1102 } 2015-04-01T16:21:57.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1103 } 2015-04-01T16:21:57.603+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|494, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.603+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.604+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1104 } 2015-04-01T16:21:57.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1105 } 2015-04-01T16:21:57.604+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|496, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.606+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.607+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1106 } 2015-04-01T16:21:57.607+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|497, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.610+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.610+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1107 } 2015-04-01T16:21:57.610+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1108 } 2015-04-01T16:21:57.611+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1109 } 2015-04-01T16:21:57.611+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|500, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.612+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.613+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1110 } 2015-04-01T16:21:57.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1111 } 2015-04-01T16:21:57.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1112 } 2015-04-01T16:21:57.615+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|503, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.616+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.618+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.618+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1113 } 2015-04-01T16:21:57.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1114 } 2015-04-01T16:21:57.619+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1115 } 2015-04-01T16:21:57.619+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.619+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|506, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.620+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.621+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1116 } 2015-04-01T16:21:57.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1117 } 2015-04-01T16:21:57.622+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1118 } 2015-04-01T16:21:57.623+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|509, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.623+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.624+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1119 } 2015-04-01T16:21:57.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1120 } 2015-04-01T16:21:57.625+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|511, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.626+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.627+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1121 } 2015-04-01T16:21:57.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1122 } 2015-04-01T16:21:57.628+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1123 } 2015-04-01T16:21:57.629+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|514, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.630+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.631+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.631+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1124 } 2015-04-01T16:21:57.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1125 } 2015-04-01T16:21:57.632+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1126 } 2015-04-01T16:21:57.632+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|517, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.634+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.635+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.635+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1127 } 2015-04-01T16:21:57.635+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1128 } 2015-04-01T16:21:57.636+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1129 } 2015-04-01T16:21:57.636+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.637+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.637+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|520, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.638+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.638+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1130 } 2015-04-01T16:21:57.638+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1131 } 2015-04-01T16:21:57.639+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1132 } 2015-04-01T16:21:57.639+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|523, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.640+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.641+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1133 } 2015-04-01T16:21:57.642+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1134 } 2015-04-01T16:21:57.642+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|525, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.642+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.644+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.644+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1135 } 2015-04-01T16:21:57.644+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1136 } 2015-04-01T16:21:57.645+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|527, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.646+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.646+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.646+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1137 } 2015-04-01T16:21:57.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1138 } 2015-04-01T16:21:57.647+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1139 } 2015-04-01T16:21:57.647+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|530, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.649+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.649+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1140 } 2015-04-01T16:21:57.650+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|531, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.653+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.654+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1141 } 2015-04-01T16:21:57.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1142 } 2015-04-01T16:21:57.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1143 } 2015-04-01T16:21:57.655+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1144 } 2015-04-01T16:21:57.655+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.656+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|535, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.656+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.657+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1145 } 2015-04-01T16:21:57.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1146 } 2015-04-01T16:21:57.658+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|537, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.659+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.660+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1147 } 2015-04-01T16:21:57.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1148 } 2015-04-01T16:21:57.661+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1149 } 2015-04-01T16:21:57.661+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|540, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.664+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.664+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1150 } 2015-04-01T16:21:57.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1151 } 2015-04-01T16:21:57.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1152 } 2015-04-01T16:21:57.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1153 } 2015-04-01T16:21:57.666+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|544, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.667+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.668+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1154 } 2015-04-01T16:21:57.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1155 } 2015-04-01T16:21:57.669+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1156 } 2015-04-01T16:21:57.669+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|547, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.671+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.672+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1157 } 2015-04-01T16:21:57.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1158 } 2015-04-01T16:21:57.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1159 } 2015-04-01T16:21:57.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1160 } 2015-04-01T16:21:57.673+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|551, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.674+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 
2015-04-01T16:21:57.675+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.675+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1161 } 2015-04-01T16:21:57.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1162 } 2015-04-01T16:21:57.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1163 } 2015-04-01T16:21:57.676+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|554, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.678+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.678+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1164 } 2015-04-01T16:21:57.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1165 } 2015-04-01T16:21:57.680+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|556, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.680+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.681+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1166 } 2015-04-01T16:21:57.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1167 } 2015-04-01T16:21:57.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1168 } 2015-04-01T16:21:57.682+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|559, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.684+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.685+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1169 } 2015-04-01T16:21:57.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1170 } 2015-04-01T16:21:57.686+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|561, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.688+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.688+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1171 } 2015-04-01T16:21:57.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1172 } 2015-04-01T16:21:57.689+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1173 } 2015-04-01T16:21:57.690+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|564, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.691+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.691+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.692+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1174 } 2015-04-01T16:21:57.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1175 } 2015-04-01T16:21:57.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1176 } 2015-04-01T16:21:57.694+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|567, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.696+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.701+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.701+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1177 } 2015-04-01T16:21:57.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1178 } 2015-04-01T16:21:57.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1179 } 2015-04-01T16:21:57.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1180 } 2015-04-01T16:21:57.703+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.703+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|571, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.704+0000 D REPL [rsSync] replication batch size is 6 2015-04-01T16:21:57.704+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1181 } 2015-04-01T16:21:57.704+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1182 } 2015-04-01T16:21:57.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1183 } 2015-04-01T16:21:57.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1184 } 2015-04-01T16:21:57.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1185 } 2015-04-01T16:21:57.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1186 } 2015-04-01T16:21:57.706+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|577, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.706+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.708+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.708+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1187 } 2015-04-01T16:21:57.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1188 } 2015-04-01T16:21:57.710+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|579, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.710+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.711+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.711+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1189 } 2015-04-01T16:21:57.712+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1190 } 2015-04-01T16:21:57.712+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1191 } 2015-04-01T16:21:57.712+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.713+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|582, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.715+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.715+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1192 } 2015-04-01T16:21:57.715+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1193 } 2015-04-01T16:21:57.715+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1194 } 2015-04-01T16:21:57.715+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|585, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.716+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.716+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.716+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1195 } 2015-04-01T16:21:57.716+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1196 } 2015-04-01T16:21:57.716+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1197 } 2015-04-01T16:21:57.717+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|588, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.717+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.718+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.718+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1198 } 2015-04-01T16:21:57.718+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|589, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.721+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.721+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.722+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1199 } 2015-04-01T16:21:57.722+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1200 } 2015-04-01T16:21:57.722+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|591, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.724+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.725+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.725+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1201 } 2015-04-01T16:21:57.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1202 } 2015-04-01T16:21:57.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1203 } 2015-04-01T16:21:57.726+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|594, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.727+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.728+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.729+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1204 } 2015-04-01T16:21:57.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1205 } 2015-04-01T16:21:57.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1206 } 2015-04-01T16:21:57.730+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|597, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.731+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.732+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.733+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1207 } 2015-04-01T16:21:57.733+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1208 } 2015-04-01T16:21:57.733+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1209 } 2015-04-01T16:21:57.733+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|600, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.735+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.735+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1210 } 2015-04-01T16:21:57.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1211 } 2015-04-01T16:21:57.736+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1212 } 2015-04-01T16:21:57.736+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|603, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.737+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.738+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.738+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1213 } 2015-04-01T16:21:57.739+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|604, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.740+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.740+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.741+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1214 } 2015-04-01T16:21:57.741+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1215 } 2015-04-01T16:21:57.742+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|606, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.743+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.744+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.745+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1216 } 2015-04-01T16:21:57.745+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1217 } 2015-04-01T16:21:57.745+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1218 } 2015-04-01T16:21:57.746+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|609, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.747+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:57.747+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.748+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.748+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1219 } 2015-04-01T16:21:57.748+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1220 } 2015-04-01T16:21:57.748+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1221 } 2015-04-01T16:21:57.749+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|612, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.750+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.751+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1222 } 2015-04-01T16:21:57.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1223 } 2015-04-01T16:21:57.751+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1224 } 2015-04-01T16:21:57.752+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|615, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.753+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.754+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.754+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1225 } 2015-04-01T16:21:57.755+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1226 } 2015-04-01T16:21:57.755+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|617, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.757+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.757+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.757+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1227 } 2015-04-01T16:21:57.757+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1228 } 2015-04-01T16:21:57.758+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|619, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.759+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.760+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.760+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1229 } 2015-04-01T16:21:57.760+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1230 } 2015-04-01T16:21:57.760+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|621, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.762+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.763+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.763+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1231 } 2015-04-01T16:21:57.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1232 } 2015-04-01T16:21:57.764+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1233 } 2015-04-01T16:21:57.764+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|624, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.765+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.765+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1234 } 2015-04-01T16:21:57.766+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1235 } 2015-04-01T16:21:57.766+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|626, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.768+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.768+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.769+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1236 } 2015-04-01T16:21:57.769+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1237 } 2015-04-01T16:21:57.770+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1238 } 2015-04-01T16:21:57.770+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|629, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.771+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.772+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1239 } 2015-04-01T16:21:57.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1240 } 2015-04-01T16:21:57.772+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1241 } 2015-04-01T16:21:57.772+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|632, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.774+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.775+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1242 } 2015-04-01T16:21:57.775+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1243 } 2015-04-01T16:21:57.776+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|634, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.777+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.777+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1244 } 2015-04-01T16:21:57.778+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1245 } 2015-04-01T16:21:57.778+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|636, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.780+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.781+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1246 } 2015-04-01T16:21:57.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1247 } 2015-04-01T16:21:57.782+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1248 } 2015-04-01T16:21:57.782+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|639, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.783+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.784+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.784+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1249 } 2015-04-01T16:21:57.785+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1250 } 2015-04-01T16:21:57.785+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|641, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.786+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.786+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.787+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1251 } 2015-04-01T16:21:57.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1252 } 2015-04-01T16:21:57.788+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1253 } 2015-04-01T16:21:57.788+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|644, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.789+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.789+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.789+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1254 } 2015-04-01T16:21:57.790+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|645, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.792+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.793+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1255 } 2015-04-01T16:21:57.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1256 } 2015-04-01T16:21:57.793+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1257 } 2015-04-01T16:21:57.794+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|648, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.795+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.796+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1258 } 2015-04-01T16:21:57.796+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1259 } 2015-04-01T16:21:57.797+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|650, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.798+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.798+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1260 } 2015-04-01T16:21:57.798+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1261 } 2015-04-01T16:21:57.799+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|652, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.801+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.802+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1262 } 2015-04-01T16:21:57.802+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1263 } 2015-04-01T16:21:57.803+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1264 } 2015-04-01T16:21:57.803+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|655, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.804+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.804+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.805+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.805+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1265 } 2015-04-01T16:21:57.806+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1266 } 2015-04-01T16:21:57.806+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|657, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.807+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.808+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.808+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1267 } 2015-04-01T16:21:57.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1268 } 2015-04-01T16:21:57.809+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:57.809+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 3:164b9000 2015-04-01T16:21:57.810+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|659, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.812+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.812+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1269 } 2015-04-01T16:21:57.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1270 } 2015-04-01T16:21:57.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1271 } 2015-04-01T16:21:57.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1272 } 2015-04-01T16:21:57.814+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.815+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|663, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.815+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1273 } 2015-04-01T16:21:57.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1274 } 2015-04-01T16:21:57.815+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|665, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.818+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.819+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1275 } 2015-04-01T16:21:57.819+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1276 } 2015-04-01T16:21:57.820+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1277 } 2015-04-01T16:21:57.821+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|668, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.822+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes 2015-04-01T16:21:57.822+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.823+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.824+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1278 } 2015-04-01T16:21:57.824+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1279 } 2015-04-01T16:21:57.824+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1280 } 2015-04-01T16:21:57.824+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1281 } 2015-04-01T16:21:57.825+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|672, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.827+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.828+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.828+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1282 } 2015-04-01T16:21:57.828+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1283 } 2015-04-01T16:21:57.828+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1284 } 2015-04-01T16:21:57.829+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|675, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.830+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.831+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.832+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1285 } 2015-04-01T16:21:57.832+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1286 } 2015-04-01T16:21:57.832+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|677, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.833+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.834+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1287 } 2015-04-01T16:21:57.834+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1288 } 2015-04-01T16:21:57.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1289 } 2015-04-01T16:21:57.835+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1290 } 2015-04-01T16:21:57.835+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|681, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.836+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.837+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.837+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1291 } 2015-04-01T16:21:57.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1292 } 2015-04-01T16:21:57.838+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1293 } 2015-04-01T16:21:57.838+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|684, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.839+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.839+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:57.839+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1294 } 2015-04-01T16:21:57.840+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|685, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.842+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:57.842+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.842+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.842+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1295 } 2015-04-01T16:21:57.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1296 } 2015-04-01T16:21:57.843+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1297 } 2015-04-01T16:21:57.843+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|688, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.845+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.846+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:57.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1298 } 2015-04-01T16:21:57.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1299 } 2015-04-01T16:21:57.847+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1300 } 2015-04-01T16:21:57.847+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|691, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.849+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.849+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1301 } 2015-04-01T16:21:57.850+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1302 } 2015-04-01T16:21:57.850+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|693, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.851+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.851+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.851+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1303 } 2015-04-01T16:21:57.852+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1304 } 2015-04-01T16:21:57.852+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|695, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.855+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.855+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:57.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1305 } 2015-04-01T16:21:57.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1306 } 2015-04-01T16:21:57.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1307 } 2015-04-01T16:21:57.856+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1308 } 2015-04-01T16:21:57.856+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|699, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.859+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.859+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1309 } 2015-04-01T16:21:57.859+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1310 } 2015-04-01T16:21:57.859+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|701, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.861+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:57.861+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:57.862+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:57.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1311 } 2015-04-01T16:21:57.863+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1312 } 2015-04-01T16:21:57.863+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|703, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:57.865+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.866+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.866+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1313 }
2015-04-01T16:21:57.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1314 }
2015-04-01T16:21:57.867+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1315 }
2015-04-01T16:21:57.867+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|706, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.868+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.869+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1316 }
2015-04-01T16:21:57.869+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1317 }
2015-04-01T16:21:57.869+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|708, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.871+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.872+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1318 }
2015-04-01T16:21:57.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1319 }
2015-04-01T16:21:57.872+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1320 }
2015-04-01T16:21:57.873+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|711, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.875+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.876+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1321 }
2015-04-01T16:21:57.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1322 }
2015-04-01T16:21:57.877+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1323 }
2015-04-01T16:21:57.878+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|714, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.879+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.879+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.879+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1324 }
2015-04-01T16:21:57.880+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1325 }
2015-04-01T16:21:57.880+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.881+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|716, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.881+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.881+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1326 }
2015-04-01T16:21:57.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1327 }
2015-04-01T16:21:57.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1328 }
2015-04-01T16:21:57.882+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|719, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.884+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.885+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.885+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1329 }
2015-04-01T16:21:57.885+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|720, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.887+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.888+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1330 }
2015-04-01T16:21:57.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1331 }
2015-04-01T16:21:57.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1332 }
2015-04-01T16:21:57.889+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.889+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|723, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.889+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1333 }
2015-04-01T16:21:57.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1334 }
2015-04-01T16:21:57.890+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|725, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.892+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.893+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1335 }
2015-04-01T16:21:57.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1336 }
2015-04-01T16:21:57.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1337 }
2015-04-01T16:21:57.894+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|728, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.895+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.895+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1338 }
2015-04-01T16:21:57.896+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|729, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.898+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.899+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1339 }
2015-04-01T16:21:57.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1340 }
2015-04-01T16:21:57.899+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|731, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.901+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.902+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.903+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1341 }
2015-04-01T16:21:57.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1342 }
2015-04-01T16:21:57.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1343 }
2015-04-01T16:21:57.904+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|734, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.904+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.905+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1344 }
2015-04-01T16:21:57.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1345 }
2015-04-01T16:21:57.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1346 }
2015-04-01T16:21:57.905+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|737, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.907+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.907+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1347 }
2015-04-01T16:21:57.908+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|738, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.911+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.911+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1348 }
2015-04-01T16:21:57.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1349 }
2015-04-01T16:21:57.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1350 }
2015-04-01T16:21:57.912+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|741, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.915+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.917+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1351 }
2015-04-01T16:21:57.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1352 }
2015-04-01T16:21:57.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1353 }
2015-04-01T16:21:57.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1354 }
2015-04-01T16:21:57.918+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.919+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|745, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.920+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.921+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.921+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1355 }
2015-04-01T16:21:57.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1356 }
2015-04-01T16:21:57.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1357 }
2015-04-01T16:21:57.922+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1358 }
2015-04-01T16:21:57.923+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.923+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|749, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.924+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1359 }
2015-04-01T16:21:57.925+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1360 }
2015-04-01T16:21:57.925+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|751, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.925+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.926+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1361 }
2015-04-01T16:21:57.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1362 }
2015-04-01T16:21:57.926+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1363 }
2015-04-01T16:21:57.926+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|754, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.932+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.932+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.932+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1364 }
2015-04-01T16:21:57.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1365 }
2015-04-01T16:21:57.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1366 }
2015-04-01T16:21:57.933+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1367 }
2015-04-01T16:21:57.934+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|758, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.935+0000 D REPL [rsBackgroundSync] bgsync buffer has 297 bytes
2015-04-01T16:21:57.936+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.937+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1368 }
2015-04-01T16:21:57.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1369 }
2015-04-01T16:21:57.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1370 }
2015-04-01T16:21:57.937+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1371 }
2015-04-01T16:21:57.938+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|762, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.938+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.939+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1372 }
2015-04-01T16:21:57.940+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1373 }
2015-04-01T16:21:57.940+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|764, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.942+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.942+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1374 }
2015-04-01T16:21:57.943+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1375 }
2015-04-01T16:21:57.943+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|766, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.944+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.946+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.946+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1376 }
2015-04-01T16:21:57.946+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1377 }
2015-04-01T16:21:57.947+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1378 }
2015-04-01T16:21:57.948+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.948+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|769, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.949+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.949+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1379 }
2015-04-01T16:21:57.949+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1380 }
2015-04-01T16:21:57.950+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|771, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.951+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.951+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1381 }
2015-04-01T16:21:57.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1382 }
2015-04-01T16:21:57.952+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1383 }
2015-04-01T16:21:57.953+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|774, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.954+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes
2015-04-01T16:21:57.955+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.955+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.955+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1384 }
2015-04-01T16:21:57.955+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1385 }
2015-04-01T16:21:57.956+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1386 }
2015-04-01T16:21:57.956+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|777, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.957+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.957+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.957+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1387 }
2015-04-01T16:21:57.958+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|778, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.960+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.961+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.961+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1388 }
2015-04-01T16:21:57.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1389 }
2015-04-01T16:21:57.962+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1390 }
2015-04-01T16:21:57.962+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|781, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.964+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.965+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.965+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1391 }
2015-04-01T16:21:57.965+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1392 }
2015-04-01T16:21:57.965+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1393 }
2015-04-01T16:21:57.965+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1394 }
2015-04-01T16:21:57.966+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|785, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.968+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.969+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.969+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1395 }
2015-04-01T16:21:57.969+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1396 }
2015-04-01T16:21:57.969+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|787, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.971+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.972+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.972+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1397 }
2015-04-01T16:21:57.972+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1398 }
2015-04-01T16:21:57.972+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1399 }
2015-04-01T16:21:57.972+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|790, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.974+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:57.974+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.975+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.975+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1400 }
2015-04-01T16:21:57.976+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1401 }
2015-04-01T16:21:57.976+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1402 }
2015-04-01T16:21:57.976+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|793, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.978+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.979+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.979+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1403 }
2015-04-01T16:21:57.980+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1404 }
2015-04-01T16:21:57.981+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.981+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|795, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.982+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.982+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1405 }
2015-04-01T16:21:57.982+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1406 }
2015-04-01T16:21:57.983+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1407 }
2015-04-01T16:21:57.983+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|798, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.984+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.985+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.985+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1408 }
2015-04-01T16:21:57.985+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1409 }
2015-04-01T16:21:57.985+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|800, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.987+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.988+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:57.988+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1410 }
2015-04-01T16:21:57.988+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1411 }
2015-04-01T16:21:57.988+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1412 }
2015-04-01T16:21:57.988+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|803, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.990+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.990+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:57.990+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1413 }
2015-04-01T16:21:57.990+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1414 }
2015-04-01T16:21:57.990+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|805, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.992+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.992+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:57.993+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1415 }
2015-04-01T16:21:57.993+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|806, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:57.996+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:57.996+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:57.997+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:57.998+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1416 }
2015-04-01T16:21:57.998+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1417 }
2015-04-01T16:21:57.998+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1418 }
2015-04-01T16:21:57.998+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1419 }
2015-04-01T16:21:57.999+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|810, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.001+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.001+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:58.001+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1420 }
2015-04-01T16:21:58.001+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1421 }
2015-04-01T16:21:58.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1422 }
2015-04-01T16:21:58.002+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1423 }
2015-04-01T16:21:58.002+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905317000|814, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.004+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.004+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1424 } 2015-04-01T16:21:58.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1425 } 2015-04-01T16:21:58.004+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1426 } 2015-04-01T16:21:58.005+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.006+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.007+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.008+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1427 } 2015-04-01T16:21:58.008+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1428 } 2015-04-01T16:21:58.008+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.009+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.010+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.011+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1429 } 2015-04-01T16:21:58.011+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|6, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.012+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.013+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.014+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1430 } 2015-04-01T16:21:58.014+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1431 } 2015-04-01T16:21:58.014+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.016+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.018+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1432 } 2015-04-01T16:21:58.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1433 } 2015-04-01T16:21:58.018+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1434 } 2015-04-01T16:21:58.019+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1435 } 2015-04-01T16:21:58.020+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.020+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.021+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.022+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1436 } 2015-04-01T16:21:58.022+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1437 } 2015-04-01T16:21:58.022+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1438 } 2015-04-01T16:21:58.023+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.023+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.025+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.025+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1439 } 2015-04-01T16:21:58.025+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1440 } 2015-04-01T16:21:58.026+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1441 } 2015-04-01T16:21:58.026+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|18, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.027+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.027+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.028+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1442 } 2015-04-01T16:21:58.028+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1443 } 2015-04-01T16:21:58.028+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1444 } 2015-04-01T16:21:58.028+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|21, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.029+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.030+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.030+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1445 } 2015-04-01T16:21:58.030+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|22, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.032+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.033+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.034+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.034+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1446 } 2015-04-01T16:21:58.034+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1447 } 2015-04-01T16:21:58.035+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|24, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.036+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.036+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.036+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1448 } 2015-04-01T16:21:58.036+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1449 } 2015-04-01T16:21:58.037+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1450 } 2015-04-01T16:21:58.037+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.039+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.039+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.040+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1451 } 2015-04-01T16:21:58.040+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1452 } 2015-04-01T16:21:58.040+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1453 } 2015-04-01T16:21:58.040+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.043+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.043+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.043+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1454 } 2015-04-01T16:21:58.043+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1455 } 2015-04-01T16:21:58.045+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.046+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.046+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.046+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1456 } 2015-04-01T16:21:58.046+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|33, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.050+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.050+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.050+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1457 } 2015-04-01T16:21:58.050+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1458 } 2015-04-01T16:21:58.050+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1459 } 2015-04-01T16:21:58.051+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.052+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.053+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.053+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1460 } 2015-04-01T16:21:58.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1461 } 2015-04-01T16:21:58.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1462 } 2015-04-01T16:21:58.055+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|39, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.057+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.057+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1463 } 2015-04-01T16:21:58.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1464 } 2015-04-01T16:21:58.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1465 } 2015-04-01T16:21:58.058+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.059+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.060+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1466 } 2015-04-01T16:21:58.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1467 } 2015-04-01T16:21:58.061+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.062+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.062+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1468 } 2015-04-01T16:21:58.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1469 } 2015-04-01T16:21:58.064+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.066+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.067+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1470 } 2015-04-01T16:21:58.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1471 } 2015-04-01T16:21:58.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1472 } 2015-04-01T16:21:58.068+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.069+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.070+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1473 } 2015-04-01T16:21:58.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1474 } 2015-04-01T16:21:58.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1475 } 2015-04-01T16:21:58.072+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.073+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.073+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.074+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1476 } 2015-04-01T16:21:58.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1477 } 2015-04-01T16:21:58.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1478 } 2015-04-01T16:21:58.075+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.076+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.078+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1479 } 2015-04-01T16:21:58.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1480 } 2015-04-01T16:21:58.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1481 } 2015-04-01T16:21:58.079+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|58, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.080+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.080+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1482 } 2015-04-01T16:21:58.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1483 } 2015-04-01T16:21:58.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1484 } 2015-04-01T16:21:58.081+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.084+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.084+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.085+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1485 } 2015-04-01T16:21:58.085+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1486 } 2015-04-01T16:21:58.085+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1487 } 2015-04-01T16:21:58.086+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.087+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.088+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.089+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1488 } 2015-04-01T16:21:58.089+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1489 } 2015-04-01T16:21:58.089+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1490 } 2015-04-01T16:21:58.089+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|67, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.090+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.090+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.091+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1491 } 2015-04-01T16:21:58.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1492 } 2015-04-01T16:21:58.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1493 } 2015-04-01T16:21:58.092+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|70, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.093+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.094+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1494 } 2015-04-01T16:21:58.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1495 } 2015-04-01T16:21:58.095+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|72, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.097+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.098+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1496 } 2015-04-01T16:21:58.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1497 } 2015-04-01T16:21:58.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1498 } 2015-04-01T16:21:58.099+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|75, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.100+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.101+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1499 } 2015-04-01T16:21:58.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1500 } 2015-04-01T16:21:58.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1501 } 2015-04-01T16:21:58.102+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|78, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.104+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.104+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1502 } 2015-04-01T16:21:58.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1503 } 2015-04-01T16:21:58.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1504 } 2015-04-01T16:21:58.106+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|81, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.107+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.108+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.108+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1505 } 2015-04-01T16:21:58.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1506 } 2015-04-01T16:21:58.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1507 } 2015-04-01T16:21:58.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1508 } 2015-04-01T16:21:58.109+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|85, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.111+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.111+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.112+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1509 } 2015-04-01T16:21:58.112+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1510 } 2015-04-01T16:21:58.112+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|87, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.114+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.115+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1511 } 2015-04-01T16:21:58.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1512 } 2015-04-01T16:21:58.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1513 } 2015-04-01T16:21:58.116+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|90, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.118+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.118+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1514 } 2015-04-01T16:21:58.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1515 } 2015-04-01T16:21:58.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1516 } 2015-04-01T16:21:58.119+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|93, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.120+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.120+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1517 } 2015-04-01T16:21:58.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1518 } 2015-04-01T16:21:58.121+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|95, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.123+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.123+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1519 } 2015-04-01T16:21:58.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1520 } 2015-04-01T16:21:58.125+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|97, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.126+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.127+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.128+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1521 } 2015-04-01T16:21:58.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1522 } 2015-04-01T16:21:58.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1523 } 2015-04-01T16:21:58.128+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|100, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.129+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.129+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1524 } 2015-04-01T16:21:58.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1525 } 2015-04-01T16:21:58.130+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|102, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.132+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.132+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1526 } 2015-04-01T16:21:58.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1527 } 2015-04-01T16:21:58.133+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|104, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.135+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.135+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1528 } 2015-04-01T16:21:58.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1529 } 2015-04-01T16:21:58.136+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|106, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.139+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.139+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1530 } 2015-04-01T16:21:58.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1531 } 2015-04-01T16:21:58.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1532 } 2015-04-01T16:21:58.140+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|109, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.141+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.141+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1533 } 2015-04-01T16:21:58.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1534 } 2015-04-01T16:21:58.142+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|111, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.144+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.144+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.145+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1535 } 2015-04-01T16:21:58.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1536 } 2015-04-01T16:21:58.146+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|113, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.147+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.147+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1537 } 2015-04-01T16:21:58.148+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|114, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.150+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.150+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1538 } 2015-04-01T16:21:58.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1539 } 2015-04-01T16:21:58.151+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|116, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.153+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.154+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1540 } 2015-04-01T16:21:58.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1541 } 2015-04-01T16:21:58.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1542 } 2015-04-01T16:21:58.154+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|119, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.156+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.156+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1543 } 2015-04-01T16:21:58.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1544 } 2015-04-01T16:21:58.157+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|121, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.159+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.159+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1545 } 2015-04-01T16:21:58.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1546 } 2015-04-01T16:21:58.160+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|123, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.162+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.163+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1547 } 2015-04-01T16:21:58.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1548 } 2015-04-01T16:21:58.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1549 } 2015-04-01T16:21:58.164+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|126, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.166+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.166+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.166+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1550 } 2015-04-01T16:21:58.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1551 } 2015-04-01T16:21:58.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1552 } 2015-04-01T16:21:58.167+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|129, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.169+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.170+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1553 } 2015-04-01T16:21:58.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1554 } 2015-04-01T16:21:58.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1555 } 2015-04-01T16:21:58.170+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|132, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.172+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.173+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1556 } 2015-04-01T16:21:58.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1557 } 2015-04-01T16:21:58.173+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|134, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.175+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.176+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1558 } 2015-04-01T16:21:58.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1559 } 2015-04-01T16:21:58.177+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|136, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.179+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.179+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1560 } 2015-04-01T16:21:58.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1561 } 2015-04-01T16:21:58.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1562 } 2015-04-01T16:21:58.180+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|139, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.181+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.181+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1563 } 2015-04-01T16:21:58.182+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|140, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.184+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.184+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.185+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1564 } 2015-04-01T16:21:58.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1565 } 2015-04-01T16:21:58.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1566 } 2015-04-01T16:21:58.186+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|143, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.188+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.188+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1567 } 2015-04-01T16:21:58.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1568 } 2015-04-01T16:21:58.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1569 } 2015-04-01T16:21:58.189+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|146, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.191+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.191+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1570 } 2015-04-01T16:21:58.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1571 } 2015-04-01T16:21:58.192+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|148, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.193+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.195+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1572 } 2015-04-01T16:21:58.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1573 } 2015-04-01T16:21:58.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1574 } 2015-04-01T16:21:58.195+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|151, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.196+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.196+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1575 } 2015-04-01T16:21:58.197+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|152, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.199+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.199+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1576 } 2015-04-01T16:21:58.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1577 } 2015-04-01T16:21:58.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1578 } 2015-04-01T16:21:58.201+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|155, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.202+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.202+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.203+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1579 } 2015-04-01T16:21:58.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1580 } 2015-04-01T16:21:58.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1581 } 2015-04-01T16:21:58.204+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|158, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.205+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.205+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1582 } 2015-04-01T16:21:58.206+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|159, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.208+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.209+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1583 } 2015-04-01T16:21:58.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1584 } 2015-04-01T16:21:58.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1585 } 2015-04-01T16:21:58.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1586 } 2015-04-01T16:21:58.210+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|163, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.214+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.214+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.215+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1587 } 2015-04-01T16:21:58.215+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1588 } 2015-04-01T16:21:58.216+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|165, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.218+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.218+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:58.218+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1589 }
2015-04-01T16:21:58.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1590 }
2015-04-01T16:21:58.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1591 }
2015-04-01T16:21:58.219+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1592 }
2015-04-01T16:21:58.219+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|169, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.220+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.220+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1593 }
2015-04-01T16:21:58.221+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1594 }
2015-04-01T16:21:58.222+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|171, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.223+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.224+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.224+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.224+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1595 }
2015-04-01T16:21:58.224+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1596 }
2015-04-01T16:21:58.225+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1597 }
2015-04-01T16:21:58.225+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|174, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.227+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.229+0000 D REPL [rsSync] replication batch size is 4
2015-04-01T16:21:58.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1598 }
2015-04-01T16:21:58.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1599 }
2015-04-01T16:21:58.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1600 }
2015-04-01T16:21:58.229+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1601 }
2015-04-01T16:21:58.230+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.230+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|178, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.232+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1602 }
2015-04-01T16:21:58.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1603 }
2015-04-01T16:21:58.233+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1604 }
2015-04-01T16:21:58.234+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|181, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.234+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.235+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.235+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1605 }
2015-04-01T16:21:58.235+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1606 }
2015-04-01T16:21:58.235+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|183, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.236+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.236+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:58.237+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1607 }
2015-04-01T16:21:58.237+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|184, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.239+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.239+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.239+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1608 }
2015-04-01T16:21:58.240+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1609 }
2015-04-01T16:21:58.240+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|186, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.242+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.242+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.243+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1610 }
2015-04-01T16:21:58.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1611 }
2015-04-01T16:21:58.243+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1612 }
2015-04-01T16:21:58.244+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|189, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.246+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.246+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1613 }
2015-04-01T16:21:58.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1614 }
2015-04-01T16:21:58.247+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|191, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.248+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.248+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.248+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1615 }
2015-04-01T16:21:58.248+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1616 }
2015-04-01T16:21:58.249+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|193, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.251+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.251+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.252+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1617 }
2015-04-01T16:21:58.252+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1618 }
2015-04-01T16:21:58.252+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|195, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.254+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.255+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.255+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1619 }
2015-04-01T16:21:58.255+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1620 }
2015-04-01T16:21:58.256+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1621 }
2015-04-01T16:21:58.256+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|198, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.258+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.259+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1622 }
2015-04-01T16:21:58.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1623 }
2015-04-01T16:21:58.259+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1624 }
2015-04-01T16:21:58.259+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|201, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.260+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.261+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.261+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1625 }
2015-04-01T16:21:58.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1626 }
2015-04-01T16:21:58.261+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|203, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.263+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.263+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:58.263+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1627 }
2015-04-01T16:21:58.264+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|204, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.266+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.267+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1628 }
2015-04-01T16:21:58.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1629 }
2015-04-01T16:21:58.267+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1630 }
2015-04-01T16:21:58.268+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|207, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.270+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.270+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1631 }
2015-04-01T16:21:58.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1632 }
2015-04-01T16:21:58.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1633 }
2015-04-01T16:21:58.271+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|210, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.272+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.273+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.273+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1634 }
2015-04-01T16:21:58.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1635 }
2015-04-01T16:21:58.274+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1636 }
2015-04-01T16:21:58.274+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|213, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.275+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.275+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:58.276+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1637 }
2015-04-01T16:21:58.276+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|214, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.278+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.278+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.278+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1638 }
2015-04-01T16:21:58.279+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1639 }
2015-04-01T16:21:58.279+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|216, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.281+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.281+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.282+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1640 }
2015-04-01T16:21:58.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1641 }
2015-04-01T16:21:58.282+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1642 }
2015-04-01T16:21:58.283+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|219, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.284+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.285+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1643 }
2015-04-01T16:21:58.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1644 }
2015-04-01T16:21:58.285+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1645 }
2015-04-01T16:21:58.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|222, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.287+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.287+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:58.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1646 }
2015-04-01T16:21:58.288+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|223, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.290+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.290+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1647 }
2015-04-01T16:21:58.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1648 }
2015-04-01T16:21:58.291+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1649 }
2015-04-01T16:21:58.292+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|226, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.293+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.293+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1650 }
2015-04-01T16:21:58.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1651 }
2015-04-01T16:21:58.294+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|228, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.296+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.297+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1652 }
2015-04-01T16:21:58.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1653 }
2015-04-01T16:21:58.298+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1654 }
2015-04-01T16:21:58.299+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|231, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.299+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.299+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.300+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1655 }
2015-04-01T16:21:58.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1656 }
2015-04-01T16:21:58.301+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1657 }
2015-04-01T16:21:58.302+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|234, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.303+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.304+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1658 }
2015-04-01T16:21:58.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1659 }
2015-04-01T16:21:58.305+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|236, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.305+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.306+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1660 }
2015-04-01T16:21:58.306+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1661 }
2015-04-01T16:21:58.307+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|238, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.309+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.310+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1662 }
2015-04-01T16:21:58.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1663 }
2015-04-01T16:21:58.310+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1664 }
2015-04-01T16:21:58.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.311+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|241, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.312+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1665 }
2015-04-01T16:21:58.313+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1666 }
2015-04-01T16:21:58.313+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|243, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.315+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.316+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1667 }
2015-04-01T16:21:58.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1668 }
2015-04-01T16:21:58.317+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1669 }
2015-04-01T16:21:58.317+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|246, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.318+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes
2015-04-01T16:21:58.319+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.320+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1670 }
2015-04-01T16:21:58.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1671 }
2015-04-01T16:21:58.321+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|248, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.321+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.322+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.322+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1672 }
2015-04-01T16:21:58.323+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1673 }
2015-04-01T16:21:58.323+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1674 }
2015-04-01T16:21:58.323+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|251, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.325+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.325+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.325+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1675 }
2015-04-01T16:21:58.326+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1676 }
2015-04-01T16:21:58.326+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1677 }
2015-04-01T16:21:58.327+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|254, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.329+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.330+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.330+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1678 }
2015-04-01T16:21:58.330+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1679 }
2015-04-01T16:21:58.330+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1680 }
2015-04-01T16:21:58.330+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|257, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.331+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.332+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1681 }
2015-04-01T16:21:58.332+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1682 }
2015-04-01T16:21:58.333+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|259, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.335+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.336+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1683 }
2015-04-01T16:21:58.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1684 }
2015-04-01T16:21:58.336+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1685 }
2015-04-01T16:21:58.337+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|262, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.338+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes
2015-04-01T16:21:58.338+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.338+0000 D REPL [rsSync] replication batch size is 3
2015-04-01T16:21:58.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1686 }
2015-04-01T16:21:58.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1687 }
2015-04-01T16:21:58.339+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1688 }
2015-04-01T16:21:58.339+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|265, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.341+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:58.342+0000 D REPL [rsSync] replication batch size is 2
2015-04-01T16:21:58.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1689 }
2015-04-01T16:21:58.343+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1690 }
2015-04-01T16:21:58.344+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|267, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:58.345+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached.
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.345+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.345+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1691 } 2015-04-01T16:21:58.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1692 } 2015-04-01T16:21:58.346+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1693 } 2015-04-01T16:21:58.346+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|270, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.347+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.349+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1694 } 2015-04-01T16:21:58.349+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1695 } 2015-04-01T16:21:58.349+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:58.349+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|272, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.350+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.350+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1696 } 2015-04-01T16:21:58.351+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1697 } 2015-04-01T16:21:58.351+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|274, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.353+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.353+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1698 } 2015-04-01T16:21:58.354+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1699 } 2015-04-01T16:21:58.354+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|276, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.356+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.356+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.356+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1700 } 2015-04-01T16:21:58.357+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1701 } 2015-04-01T16:21:58.358+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|278, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.359+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.360+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1702 } 2015-04-01T16:21:58.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1703 } 2015-04-01T16:21:58.360+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1704 } 2015-04-01T16:21:58.361+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|281, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.362+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.362+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1705 } 2015-04-01T16:21:58.363+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1706 } 2015-04-01T16:21:58.363+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|283, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.365+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.366+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1707 } 2015-04-01T16:21:58.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1708 } 2015-04-01T16:21:58.366+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1709 } 2015-04-01T16:21:58.367+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|286, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.368+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.368+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.368+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1710 } 2015-04-01T16:21:58.369+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1711 } 2015-04-01T16:21:58.369+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|288, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.371+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.372+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1712 } 2015-04-01T16:21:58.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1713 } 2015-04-01T16:21:58.373+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1714 } 2015-04-01T16:21:58.373+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|291, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.374+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.374+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.375+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1715 } 2015-04-01T16:21:58.375+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1716 } 2015-04-01T16:21:58.376+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|293, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.377+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.378+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1717 } 2015-04-01T16:21:58.378+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1718 } 2015-04-01T16:21:58.379+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|295, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.380+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.380+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.380+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1719 } 2015-04-01T16:21:58.381+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|296, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.383+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.384+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1720 } 2015-04-01T16:21:58.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1721 } 2015-04-01T16:21:58.384+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1722 } 2015-04-01T16:21:58.385+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|299, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.387+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.387+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1723 } 2015-04-01T16:21:58.387+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1724 } 2015-04-01T16:21:58.388+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1725 } 2015-04-01T16:21:58.388+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|302, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.389+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.389+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.390+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1726 } 2015-04-01T16:21:58.390+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|303, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.392+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.393+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.393+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1727 } 2015-04-01T16:21:58.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1728 } 2015-04-01T16:21:58.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1729 } 2015-04-01T16:21:58.394+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1730 } 2015-04-01T16:21:58.394+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|307, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.395+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.395+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.396+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1731 } 2015-04-01T16:21:58.397+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1732 } 2015-04-01T16:21:58.398+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|309, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.399+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.400+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1733 } 2015-04-01T16:21:58.400+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1734 } 2015-04-01T16:21:58.400+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|311, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.402+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.402+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1735 } 2015-04-01T16:21:58.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1736 } 2015-04-01T16:21:58.403+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1737 } 2015-04-01T16:21:58.403+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|314, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.404+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.405+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1738 } 2015-04-01T16:21:58.405+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1739 } 2015-04-01T16:21:58.406+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|316, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.407+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.407+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.408+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1740 } 2015-04-01T16:21:58.408+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1741 } 2015-04-01T16:21:58.408+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|318, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.410+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.411+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.411+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1742 } 2015-04-01T16:21:58.411+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1743 } 2015-04-01T16:21:58.411+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|320, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.413+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.413+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.414+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.414+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1744 } 2015-04-01T16:21:58.414+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1745 } 2015-04-01T16:21:58.414+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1746 } 2015-04-01T16:21:58.415+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|323, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.417+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.417+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.417+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1747 } 2015-04-01T16:21:58.417+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1748 } 2015-04-01T16:21:58.418+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|325, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.419+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.419+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.419+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1749 } 2015-04-01T16:21:58.420+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1750 } 2015-04-01T16:21:58.420+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|327, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.422+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.422+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.423+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1751 } 2015-04-01T16:21:58.423+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1752 } 2015-04-01T16:21:58.423+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|329, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.425+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.426+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.426+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1753 } 2015-04-01T16:21:58.426+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1754 } 2015-04-01T16:21:58.426+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1755 } 2015-04-01T16:21:58.427+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|332, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.428+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.428+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.429+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1756 } 2015-04-01T16:21:58.429+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1757 } 2015-04-01T16:21:58.429+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|334, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.431+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.432+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.432+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1758 } 2015-04-01T16:21:58.432+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1759 } 2015-04-01T16:21:58.432+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1760 } 2015-04-01T16:21:58.433+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|337, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.434+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.434+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.435+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.435+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1761 } 2015-04-01T16:21:58.435+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1762 } 2015-04-01T16:21:58.436+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|339, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.438+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.438+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1763 } 2015-04-01T16:21:58.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1764 } 2015-04-01T16:21:58.438+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1765 } 2015-04-01T16:21:58.439+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|342, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.440+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.441+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.441+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1766 } 2015-04-01T16:21:58.441+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1767 } 2015-04-01T16:21:58.441+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|344, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.444+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.444+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1768 } 2015-04-01T16:21:58.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1769 } 2015-04-01T16:21:58.444+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1770 } 2015-04-01T16:21:58.445+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|347, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.447+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.447+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.447+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1771 } 2015-04-01T16:21:58.447+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1772 } 2015-04-01T16:21:58.448+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|349, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.449+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.450+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.450+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1773 } 2015-04-01T16:21:58.450+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1774 } 2015-04-01T16:21:58.451+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|351, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.453+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.453+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.453+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.453+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1775 } 2015-04-01T16:21:58.454+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1776 } 2015-04-01T16:21:58.454+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1777 } 2015-04-01T16:21:58.454+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|354, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.456+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.456+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.456+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1778 } 2015-04-01T16:21:58.457+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1779 } 2015-04-01T16:21:58.457+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|356, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.459+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.459+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.459+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1780 } 2015-04-01T16:21:58.460+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1781 } 2015-04-01T16:21:58.460+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|358, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.462+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.463+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.463+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1782 } 2015-04-01T16:21:58.463+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1783 } 2015-04-01T16:21:58.463+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|360, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.466+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.466+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.466+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1784 } 2015-04-01T16:21:58.466+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1785 } 2015-04-01T16:21:58.467+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1786 } 2015-04-01T16:21:58.467+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|363, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.468+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.470+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.470+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1787 } 2015-04-01T16:21:58.470+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1788 } 2015-04-01T16:21:58.471+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|365, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.472+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.472+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.474+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1789 } 2015-04-01T16:21:58.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1790 } 2015-04-01T16:21:58.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1791 } 2015-04-01T16:21:58.474+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1792 } 2015-04-01T16:21:58.475+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|369, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.475+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.475+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.476+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1793 } 2015-04-01T16:21:58.476+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1794 } 2015-04-01T16:21:58.476+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1795 } 2015-04-01T16:21:58.476+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|372, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.478+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.478+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.478+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1796 } 2015-04-01T16:21:58.479+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|373, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.481+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.482+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.485+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1797 } 2015-04-01T16:21:58.486+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1798 } 2015-04-01T16:21:58.486+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1799 } 2015-04-01T16:21:58.488+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|376, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.488+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.489+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.489+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1800 } 2015-04-01T16:21:58.490+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1801 } 2015-04-01T16:21:58.490+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1802 } 2015-04-01T16:21:58.490+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1803 } 2015-04-01T16:21:58.490+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|380, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.491+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not 
be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.492+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.492+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1804 } 2015-04-01T16:21:58.492+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1805 } 2015-04-01T16:21:58.493+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|382, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.494+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.494+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.494+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1806 } 2015-04-01T16:21:58.495+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1807 } 2015-04-01T16:21:58.495+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|384, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.497+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.498+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.498+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1808 } 2015-04-01T16:21:58.498+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1809 } 2015-04-01T16:21:58.499+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|386, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.501+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.502+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.502+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1810 } 2015-04-01T16:21:58.502+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1811 } 2015-04-01T16:21:58.503+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1812 } 2015-04-01T16:21:58.503+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1813 } 2015-04-01T16:21:58.504+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|390, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.504+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.505+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.506+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1814 } 2015-04-01T16:21:58.507+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|391, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.507+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.508+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.508+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1815 } 2015-04-01T16:21:58.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1816 } 2015-04-01T16:21:58.509+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1817 } 2015-04-01T16:21:58.510+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|394, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.510+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.510+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.510+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1818 } 2015-04-01T16:21:58.511+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1819 } 2015-04-01T16:21:58.511+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|396, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.512+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.513+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.514+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.514+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1820 } 2015-04-01T16:21:58.514+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1821 } 2015-04-01T16:21:58.515+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|398, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.516+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.517+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1822 } 2015-04-01T16:21:58.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1823 } 2015-04-01T16:21:58.518+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1824 } 2015-04-01T16:21:58.519+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|401, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.520+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.521+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1825 } 2015-04-01T16:21:58.521+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1826 } 2015-04-01T16:21:58.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1827 } 2015-04-01T16:21:58.522+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1828 } 2015-04-01T16:21:58.523+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.523+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|405, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.524+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1829 } 2015-04-01T16:21:58.524+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1830 } 2015-04-01T16:21:58.524+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|407, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.526+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.526+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1831 } 2015-04-01T16:21:58.526+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1832 } 2015-04-01T16:21:58.526+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|409, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.528+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.528+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.528+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1833 } 2015-04-01T16:21:58.529+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|410, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.531+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.531+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.531+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1834 } 2015-04-01T16:21:58.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1835 } 2015-04-01T16:21:58.532+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1836 } 2015-04-01T16:21:58.533+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|413, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.534+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.535+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1837 } 2015-04-01T16:21:58.535+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1838 } 2015-04-01T16:21:58.536+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|415, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.537+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.537+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1839 } 2015-04-01T16:21:58.538+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1840 } 2015-04-01T16:21:58.538+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|417, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.541+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.542+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1841 } 2015-04-01T16:21:58.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1842 } 2015-04-01T16:21:58.542+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1843 } 2015-04-01T16:21:58.543+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|420, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.543+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.544+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1844 } 2015-04-01T16:21:58.544+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1845 } 2015-04-01T16:21:58.545+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1846 } 2015-04-01T16:21:58.545+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|423, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.547+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.548+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1847 } 2015-04-01T16:21:58.549+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1848 } 2015-04-01T16:21:58.549+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.549+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|425, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.550+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.551+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1849 } 2015-04-01T16:21:58.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1850 } 2015-04-01T16:21:58.552+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1851 } 2015-04-01T16:21:58.553+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|428, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.553+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.554+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1852 } 2015-04-01T16:21:58.554+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1853 } 2015-04-01T16:21:58.555+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|430, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.555+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.555+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.555+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1854 } 2015-04-01T16:21:58.556+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1855 } 2015-04-01T16:21:58.556+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|432, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.558+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.559+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1856 } 2015-04-01T16:21:58.559+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1857 } 2015-04-01T16:21:58.559+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|434, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.562+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.563+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1858 } 2015-04-01T16:21:58.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1859 } 2015-04-01T16:21:58.564+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1860 } 2015-04-01T16:21:58.564+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|437, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.565+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.567+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.567+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1861 } 2015-04-01T16:21:58.568+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1862 } 2015-04-01T16:21:58.569+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1863 } 2015-04-01T16:21:58.569+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|440, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.569+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.571+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1864 } 2015-04-01T16:21:58.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1865 } 2015-04-01T16:21:58.571+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1866 } 2015-04-01T16:21:58.572+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|443, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.572+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.573+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1867 } 2015-04-01T16:21:58.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1868 } 2015-04-01T16:21:58.573+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1869 } 2015-04-01T16:21:58.574+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|446, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.574+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.574+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.574+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1870 } 2015-04-01T16:21:58.575+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|447, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.580+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.581+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1871 } 2015-04-01T16:21:58.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1872 } 2015-04-01T16:21:58.581+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1873 } 2015-04-01T16:21:58.582+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|450, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.584+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.585+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1874 } 2015-04-01T16:21:58.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1875 } 2015-04-01T16:21:58.585+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1876 } 2015-04-01T16:21:58.586+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|453, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.587+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.588+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1877 } 2015-04-01T16:21:58.588+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1878 } 2015-04-01T16:21:58.589+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1879 } 2015-04-01T16:21:58.589+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.589+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|456, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.589+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.590+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1880 } 2015-04-01T16:21:58.590+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1881 } 2015-04-01T16:21:58.590+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|458, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.592+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.593+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.593+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1882 } 2015-04-01T16:21:58.594+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|459, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.596+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.597+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1883 } 2015-04-01T16:21:58.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1884 } 2015-04-01T16:21:58.597+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1885 } 2015-04-01T16:21:58.598+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|462, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.599+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.599+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1886 } 2015-04-01T16:21:58.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1887 } 2015-04-01T16:21:58.600+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1888 } 2015-04-01T16:21:58.600+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|465, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.602+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.603+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.603+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1889 } 2015-04-01T16:21:58.604+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1890 } 2015-04-01T16:21:58.604+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|467, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.606+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.606+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1891 } 2015-04-01T16:21:58.607+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1892 } 2015-04-01T16:21:58.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1893 } 2015-04-01T16:21:58.608+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1894 } 2015-04-01T16:21:58.608+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|471, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.610+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.611+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.612+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.612+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1895 } 2015-04-01T16:21:58.613+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1896 } 2015-04-01T16:21:58.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1897 } 2015-04-01T16:21:58.614+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1898 } 2015-04-01T16:21:58.615+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|475, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.615+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.616+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1899 } 2015-04-01T16:21:58.616+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1900 } 2015-04-01T16:21:58.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1901 } 2015-04-01T16:21:58.617+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1902 } 2015-04-01T16:21:58.618+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|479, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.619+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.619+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1903 } 2015-04-01T16:21:58.620+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1904 } 2015-04-01T16:21:58.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1905 } 2015-04-01T16:21:58.621+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1906 } 2015-04-01T16:21:58.622+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|483, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.622+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.623+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1907 } 2015-04-01T16:21:58.624+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.624+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1908 } 2015-04-01T16:21:58.625+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1909 } 2015-04-01T16:21:58.625+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|486, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.626+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.627+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.627+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1910 } 2015-04-01T16:21:58.627+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1911 } 2015-04-01T16:21:58.628+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|488, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.628+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.628+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.629+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1912 } 2015-04-01T16:21:58.629+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1913 } 2015-04-01T16:21:58.630+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1914 } 2015-04-01T16:21:58.631+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|491, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.631+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.632+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.633+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1915 } 2015-04-01T16:21:58.633+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1916 } 2015-04-01T16:21:58.633+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|493, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.635+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.636+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.637+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1917 } 2015-04-01T16:21:58.637+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1918 } 2015-04-01T16:21:58.637+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1919 } 2015-04-01T16:21:58.638+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|496, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.638+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.639+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.640+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1920 } 2015-04-01T16:21:58.640+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1921 } 2015-04-01T16:21:58.641+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1922 } 2015-04-01T16:21:58.641+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|499, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.642+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.642+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.643+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1923 } 2015-04-01T16:21:58.643+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1924 } 2015-04-01T16:21:58.644+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1925 } 2015-04-01T16:21:58.644+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|502, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.644+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.644+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.644+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1926 } 2015-04-01T16:21:58.645+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1927 } 2015-04-01T16:21:58.645+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|504, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.648+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.649+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1928 } 2015-04-01T16:21:58.649+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1929 } 2015-04-01T16:21:58.650+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1930 } 2015-04-01T16:21:58.650+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|507, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.652+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.653+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.653+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1931 } 2015-04-01T16:21:58.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1932 } 2015-04-01T16:21:58.654+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1933 } 2015-04-01T16:21:58.655+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|510, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.656+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.656+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.656+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1934 } 2015-04-01T16:21:58.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1935 } 2015-04-01T16:21:58.657+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1936 } 2015-04-01T16:21:58.658+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|513, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.659+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.659+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1937 } 2015-04-01T16:21:58.660+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1938 } 2015-04-01T16:21:58.661+0000 D REPL [rsBackgroundSync] bgsync buffer has 198 bytes 2015-04-01T16:21:58.661+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|515, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.661+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.662+0000 D REPL [rsSync] replication batch size is 4 2015-04-01T16:21:58.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1939 } 2015-04-01T16:21:58.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1940 } 2015-04-01T16:21:58.663+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1941 } 2015-04-01T16:21:58.664+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1942 } 2015-04-01T16:21:58.664+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|519, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.665+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.665+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1943 } 2015-04-01T16:21:58.665+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1944 } 2015-04-01T16:21:58.666+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|521, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.667+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.668+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.668+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1945 } 2015-04-01T16:21:58.668+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|522, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.670+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.671+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1946 } 2015-04-01T16:21:58.672+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1947 } 2015-04-01T16:21:58.673+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1948 } 2015-04-01T16:21:58.673+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|525, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.675+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.675+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.675+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1949 } 2015-04-01T16:21:58.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1950 } 2015-04-01T16:21:58.676+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1951 } 2015-04-01T16:21:58.677+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|528, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.678+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.678+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.678+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1952 } 2015-04-01T16:21:58.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1953 } 2015-04-01T16:21:58.679+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1954 } 2015-04-01T16:21:58.680+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|531, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.680+0000 D REPL [rsBackgroundSync] bgsync buffer has 99 bytes 2015-04-01T16:21:58.681+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be 
cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.681+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1955 } 2015-04-01T16:21:58.682+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1956 } 2015-04-01T16:21:58.682+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|533, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.684+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.685+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1957 } 2015-04-01T16:21:58.685+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1958 } 2015-04-01T16:21:58.685+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|535, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.687+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.687+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1959 } 2015-04-01T16:21:58.687+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1960 } 2015-04-01T16:21:58.687+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|537, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.690+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.690+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1961 } 2015-04-01T16:21:58.690+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1962 } 2015-04-01T16:21:58.691+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1963 } 2015-04-01T16:21:58.691+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|540, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.693+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.693+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1964 } 2015-04-01T16:21:58.693+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1965 } 2015-04-01T16:21:58.693+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|542, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.696+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.697+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1966 } 2015-04-01T16:21:58.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1967 } 2015-04-01T16:21:58.697+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1968 } 2015-04-01T16:21:58.698+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|545, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.699+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.699+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.699+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1969 } 2015-04-01T16:21:58.699+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1970 } 2015-04-01T16:21:58.700+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|547, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.701+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.701+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.701+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.702+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1971 } 2015-04-01T16:21:58.702+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|548, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.704+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.705+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1972 } 2015-04-01T16:21:58.705+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1973 } 2015-04-01T16:21:58.705+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|550, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.707+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.708+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1974 } 2015-04-01T16:21:58.708+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1975 } 2015-04-01T16:21:58.709+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1976 } 2015-04-01T16:21:58.709+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|553, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.710+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.710+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.711+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1977 } 2015-04-01T16:21:58.711+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1978 } 2015-04-01T16:21:58.711+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|555, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.713+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.713+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.714+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1979 } 2015-04-01T16:21:58.714+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1980 } 2015-04-01T16:21:58.714+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|557, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.716+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.717+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.717+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1981 } 2015-04-01T16:21:58.717+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1982 } 2015-04-01T16:21:58.717+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1983 } 2015-04-01T16:21:58.717+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|560, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.719+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.719+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1984 } 2015-04-01T16:21:58.720+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1985 } 2015-04-01T16:21:58.720+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|562, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.722+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.723+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.723+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.723+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1986 } 2015-04-01T16:21:58.723+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1987 } 2015-04-01T16:21:58.724+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1988 } 2015-04-01T16:21:58.724+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|565, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.725+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.725+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.726+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1989 } 2015-04-01T16:21:58.726+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|566, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.728+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.729+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.729+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1990 } 2015-04-01T16:21:58.730+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1991 } 2015-04-01T16:21:58.730+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1992 } 2015-04-01T16:21:58.730+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|569, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.732+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.732+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:58.732+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1993 } 2015-04-01T16:21:58.732+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1994 } 2015-04-01T16:21:58.733+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1995 } 2015-04-01T16:21:58.733+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|572, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.735+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.735+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1996 } 2015-04-01T16:21:58.735+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1997 } 2015-04-01T16:21:58.736+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|574, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.738+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.739+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:58.739+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1998 } 2015-04-01T16:21:58.739+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1999 } 2015-04-01T16:21:58.739+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh. 
2015-04-01T16:21:58.739+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|576, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.753+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:58.753+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:21:58.753+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:58.786+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.787+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:58.787+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 0 } 2015-04-01T16:21:58.787+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|577, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.789+0000 D REPL [rsBackgroundSync] bgsync buffer has 0 bytes 2015-04-01T16:21:58.789+0000 D REPL [rsBackgroundSync] bgsync buffer has 1545 bytes 2015-04-01T16:21:58.790+0000 D REPL [rsBackgroundSync] bgsync buffer has 3090 bytes 2015-04-01T16:21:58.790+0000 D REPL [rsBackgroundSync] bgsync buffer has 4635 bytes 2015-04-01T16:21:58.790+0000 D REPL [rsBackgroundSync] bgsync buffer has 6180 bytes 2015-04-01T16:21:58.791+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 1030 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 2575 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 4120 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 5665 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 7210 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 8755 bytes 2015-04-01T16:21:58.795+0000 D REPL [rsBackgroundSync] bgsync buffer has 10300 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 11845 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 13390 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 14935 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 16480 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 18025 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 19570 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 21115 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 22660 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 24205 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 25750 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 27295 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 28840 bytes 2015-04-01T16:21:58.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 30385 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 31930 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 33475 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 35020 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 36565 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 38110 bytes 2015-04-01T16:21:58.809+0000 D REPL [rsSync] replication batch size is 65 2015-04-01T16:21:58.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:58.809+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 2 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 3 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 4 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 5 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 6 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 7 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 8 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 9 } 2015-04-01T16:21:58.810+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 10 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 11 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 12 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 13 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 14 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 15 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 16 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 17 } 2015-04-01T16:21:58.811+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 18 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 19 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 20 } 
2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 21 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 22 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 23 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 24 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 25 } 2015-04-01T16:21:58.812+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 26 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 27 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 28 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 29 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 30 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 31 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 32 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 33 } 2015-04-01T16:21:58.813+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 34 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 35 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 36 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 37 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 38 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 39 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 40 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 41 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 42 } 2015-04-01T16:21:58.814+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 43 
} 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 44 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 45 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 46 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 47 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 48 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 49 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 50 } 2015-04-01T16:21:58.815+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 51 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 52 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 53 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 54 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 55 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 56 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 57 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 58 } 2015-04-01T16:21:58.816+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 59 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 60 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 61 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 62 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 63 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 64 } 2015-04-01T16:21:58.817+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 65 } 2015-04-01T16:21:58.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 39655 
bytes 2015-04-01T16:21:58.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 41200 bytes 2015-04-01T16:21:58.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 42745 bytes 2015-04-01T16:21:58.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 44290 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 45835 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 47380 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 48925 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 50470 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 52015 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 53560 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 55105 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 56650 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 58195 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 59740 bytes 2015-04-01T16:21:58.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 61285 bytes 2015-04-01T16:21:58.825+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:58.825+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|642, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.872+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:21:58.880+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:21:58.881+0000 D REPL [rsSync] replication batch size is 603 2015-04-01T16:21:58.881+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:21:58.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 66 } 2015-04-01T16:21:58.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 67 } 2015-04-01T16:21:58.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 68 } 2015-04-01T16:21:58.881+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 69 } 2015-04-01T16:21:58.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 70 } 2015-04-01T16:21:58.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 71 } 2015-04-01T16:21:58.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 72 } 2015-04-01T16:21:58.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 73 } 2015-04-01T16:21:58.882+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 74 } 2015-04-01T16:21:58.883+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 75 } 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 721 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 2266 bytes 
2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 3811 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 5356 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 6901 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 8446 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 9991 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 11536 bytes 2015-04-01T16:21:58.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 13081 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 14626 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 16171 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 17716 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 19261 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 20806 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 22351 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 23896 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 25441 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 26986 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 28531 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 30076 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 31621 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 33166 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 34711 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 36256 bytes 2015-04-01T16:21:58.884+0000 D REPL [rsBackgroundSync] bgsync buffer has 37801 bytes 2015-04-01T16:21:58.884+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 39346 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 40891 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 42436 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 43981 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 45526 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 47071 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 48616 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 50161 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 51706 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 53251 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 54796 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 56341 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 57886 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 59431 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 60976 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 62521 bytes 2015-04-01T16:21:58.885+0000 D REPL [rsBackgroundSync] bgsync buffer has 64066 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 65611 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 67156 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 68701 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 70246 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 71791 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 73336 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 74881 
bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 76426 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 77971 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 79516 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 81061 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 82606 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 84151 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 85696 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 87241 bytes 2015-04-01T16:21:58.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 88786 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 90331 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 91876 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 93421 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 94966 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 96511 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 98056 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 99601 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 101146 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 102691 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 104236 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 105781 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 107326 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 108871 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 110416 bytes 2015-04-01T16:21:58.887+0000 D 
REPL [rsBackgroundSync] bgsync buffer has 111961 bytes 2015-04-01T16:21:58.887+0000 D REPL [rsBackgroundSync] bgsync buffer has 113506 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 115051 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 116596 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 118141 bytes 2015-04-01T16:21:58.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 76 } 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 119686 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 121231 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 122776 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 124321 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 125866 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 127411 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 128956 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 130501 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 132046 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 133591 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 135136 bytes 2015-04-01T16:21:58.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 136681 bytes 2015-04-01T16:21:58.888+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 77 } 2015-04-01T16:21:58.888+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:00.888Z 2015-04-01T16:21:58.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 78 } 2015-04-01T16:21:58.889+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 79 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 80 } 
2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 81 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 82 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 83 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 84 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 85 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 86 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 87 } 2015-04-01T16:21:58.890+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 88 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 89 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 90 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 91 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 92 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 93 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 94 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 95 } 2015-04-01T16:21:58.891+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 96 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 97 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 98 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 99 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 100 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 101 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 102 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
103 } 2015-04-01T16:21:58.892+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 104 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 105 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 106 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 107 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 108 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 109 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 110 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 111 } 2015-04-01T16:21:58.893+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 112 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 113 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 114 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 115 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 116 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 117 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 118 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 119 } 2015-04-01T16:21:58.894+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 120 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 121 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 122 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 123 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 124 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 125 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 
15] Using idhack: { _id: 126 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 127 } 2015-04-01T16:21:58.895+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 128 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 129 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 130 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 131 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 132 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 133 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 134 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 135 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 136 } 2015-04-01T16:21:58.896+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 137 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 138 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 139 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 140 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 141 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 142 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 143 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 144 } 2015-04-01T16:21:58.897+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 145 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 146 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 147 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 148 } 2015-04-01T16:21:58.898+0000 D 
QUERY [repl writer worker 15] Using idhack: { _id: 149 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 150 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 151 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 152 } 2015-04-01T16:21:58.898+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 153 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 154 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 155 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 156 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 157 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 158 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 159 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 160 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 161 } 2015-04-01T16:21:58.899+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 162 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 163 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 164 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 165 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 166 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 167 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 168 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 169 } 2015-04-01T16:21:58.900+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 170 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 171 } 
2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 172 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 173 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 174 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 175 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 176 } 2015-04-01T16:21:58.901+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 177 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 178 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 179 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 180 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 181 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 182 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 183 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 184 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 185 } 2015-04-01T16:21:58.902+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 186 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 187 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 188 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 189 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 190 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 191 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 192 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 193 } 2015-04-01T16:21:58.903+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: 194 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 195 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 196 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 197 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 198 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 199 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 200 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 201 } 2015-04-01T16:21:58.904+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 202 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 203 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 204 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 205 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 206 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 207 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 208 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 209 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 210 } 2015-04-01T16:21:58.905+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 211 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 212 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 213 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 214 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 215 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 216 } 2015-04-01T16:21:58.906+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 217 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 218 } 2015-04-01T16:21:58.906+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 219 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 220 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 221 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 222 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 223 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 224 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 225 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 226 } 2015-04-01T16:21:58.907+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 227 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 228 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 229 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 230 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 231 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 232 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 233 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 234 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 235 } 2015-04-01T16:21:58.908+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 236 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 237 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 238 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 239 } 
2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 240 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 241 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 242 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 243 } 2015-04-01T16:21:58.909+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 244 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 245 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 246 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 247 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 248 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 249 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 250 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 251 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 252 } 2015-04-01T16:21:58.910+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 253 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 254 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 255 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 256 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 257 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 258 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 259 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 260 } 2015-04-01T16:21:58.911+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 261 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: 262 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 263 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 264 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 265 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 266 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 267 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 268 } 2015-04-01T16:21:58.912+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 269 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 270 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 271 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 272 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 273 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 274 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 275 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 276 } 2015-04-01T16:21:58.913+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 277 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 278 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 279 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 280 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 281 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 282 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 283 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 284 } 2015-04-01T16:21:58.914+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 285 } 2015-04-01T16:21:58.914+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 286 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 287 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 288 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 289 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 290 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 291 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 292 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 293 } 2015-04-01T16:21:58.915+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 294 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 295 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 296 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 297 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 298 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 299 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 300 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 301 } 2015-04-01T16:21:58.916+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 302 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 303 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 304 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 305 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 306 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 307 } 
2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 308 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 309 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 310 } 2015-04-01T16:21:58.917+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 311 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 312 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 313 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 314 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 315 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 316 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 317 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 318 } 2015-04-01T16:21:58.918+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 319 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 320 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 321 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 322 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 323 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 324 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 325 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 326 } 2015-04-01T16:21:58.919+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 327 } 2015-04-01T16:21:58.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 328 } 2015-04-01T16:21:58.920+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 329 } 2015-04-01T16:21:58.920+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: 330 }
[... repeated "D QUERY [repl writer worker 15] Using idhack" entries for _id: 331 through _id: 668, timestamps 2015-04-01T16:21:58.920+0000 to 2015-04-01T16:21:58.960+0000 ...]
2015-04-01T16:21:58.980+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ {
_id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|1245, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:58.981+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.038+0000 D REPL [rsSync] replication batch size is 1332 2015-04-01T16:21:59.039+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 669 }
[... repeated "D QUERY [repl writer worker 15] Using idhack" entries for _id: 670 through _id: 797, timestamps 2015-04-01T16:21:59.039+0000 to 2015-04-01T16:21:59.054+0000 ...]
2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15]
Using idhack: { _id: 798 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 799 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 800 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 801 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 802 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 803 } 2015-04-01T16:21:59.054+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 804 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 805 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 806 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 807 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 808 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 809 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 810 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 811 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 812 } 2015-04-01T16:21:59.055+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 813 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 814 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 815 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 816 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 817 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 818 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 819 } 2015-04-01T16:21:59.056+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 820 } 2015-04-01T16:21:59.056+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 821 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 822 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 823 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 824 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 825 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 826 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 827 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 828 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 829 } 2015-04-01T16:21:59.057+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 830 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 831 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 832 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 833 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 834 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 835 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 836 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 837 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 838 } 2015-04-01T16:21:59.058+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 839 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 840 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 841 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 842 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 843 } 
2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 844 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 845 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 846 } 2015-04-01T16:21:59.059+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 847 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 848 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 849 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 850 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 851 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 852 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 853 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 854 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 855 } 2015-04-01T16:21:59.060+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 856 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 857 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 858 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 859 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 860 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 861 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 862 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 863 } 2015-04-01T16:21:59.061+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 864 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 865 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: 866 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 867 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 868 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 869 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 870 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 871 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 872 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 873 } 2015-04-01T16:21:59.062+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 874 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 875 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 876 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 877 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 878 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 879 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 880 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 881 } 2015-04-01T16:21:59.063+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 882 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 883 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 884 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 885 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 886 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 887 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 888 } 2015-04-01T16:21:59.064+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 889 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 890 } 2015-04-01T16:21:59.064+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 891 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 892 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 893 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 894 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 895 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 896 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 897 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 898 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 899 } 2015-04-01T16:21:59.065+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 900 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 901 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 902 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 903 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 904 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 905 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 906 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 907 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 908 } 2015-04-01T16:21:59.066+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 909 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 910 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 911 } 
2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 912 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 913 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 914 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 915 } 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 916 } 2015-04-01T16:21:59.067+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:21:59.067+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 917 } 2015-04-01T16:21:59.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 918 } 2015-04-01T16:21:59.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 919 } 2015-04-01T16:21:59.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 920 } 2015-04-01T16:21:59.068+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 921 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 922 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 923 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 924 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 925 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 926 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 927 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 928 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 929 } 2015-04-01T16:21:59.069+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 930 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 931 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 932 } 2015-04-01T16:21:59.070+0000 D QUERY [repl 
writer worker 15] Using idhack: { _id: 933 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 934 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 935 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 936 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 937 } 2015-04-01T16:21:59.070+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 938 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 939 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 940 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 941 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 942 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 943 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 944 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 945 } 2015-04-01T16:21:59.071+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 946 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 947 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 948 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 949 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 950 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 951 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 952 } 2015-04-01T16:21:59.072+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 953 } 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 954 } 2015-04-01T16:21:59.073+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to 
localhost:27017 was OK 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 955 } 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 956 } 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 957 } 2015-04-01T16:21:59.073+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:01.073Z 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 958 } 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 959 } 2015-04-01T16:21:59.073+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 960 } 2015-04-01T16:21:59.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 961 } 2015-04-01T16:21:59.074+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:59.074+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:21:59.074+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:21:59.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 962 } 2015-04-01T16:21:59.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 963 } 2015-04-01T16:21:59.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 964 } 2015-04-01T16:21:59.074+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 965 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 966 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 967 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 968 } 
2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 969 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 970 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 971 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 972 } 2015-04-01T16:21:59.075+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 973 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 974 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 975 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 976 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 977 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 978 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 979 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 980 } 2015-04-01T16:21:59.076+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 981 } 2015-04-01T16:21:59.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 982 } 2015-04-01T16:21:59.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 983 } 2015-04-01T16:21:59.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 984 } 2015-04-01T16:21:59.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 985 } 2015-04-01T16:21:59.077+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 986 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 987 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 988 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 989 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 990 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] 
Using idhack: { _id: 991 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 992 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 993 } 2015-04-01T16:21:59.078+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 994 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 995 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 996 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 997 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 998 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 999 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1000 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1001 } 2015-04-01T16:21:59.079+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1002 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1003 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1004 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1005 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1006 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1007 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1008 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1009 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1010 } 2015-04-01T16:21:59.080+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1011 } 
2015-04-01T16:21:59.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1012 } 2015-04-01T16:21:59.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1013 } 2015-04-01T16:21:59.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1014 } 2015-04-01T16:21:59.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1015 } 2015-04-01T16:21:59.081+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1016 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1017 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1018 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1019 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1020 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1021 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1022 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1023 } 2015-04-01T16:21:59.082+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1024 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1025 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1026 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1027 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1028 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1029 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1030 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1031 } 2015-04-01T16:21:59.083+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1032 } 2015-04-01T16:21:59.084+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1033 } 2015-04-01T16:21:59.084+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1034 } 2015-04-01T16:21:59.084+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1035 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1036 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1037 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1038 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1039 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1040 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1041 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1042 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1043 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1044 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1045 } 2015-04-01T16:21:59.090+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1046 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1047 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1048 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1049 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1050 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1051 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1052 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1053 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1054 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1055 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1056 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1057 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1058 } 2015-04-01T16:21:59.091+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1059 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1060 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1061 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1062 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1063 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1064 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1065 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1066 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1067 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1068 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1069 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1070 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1071 } 2015-04-01T16:21:59.092+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1072 } 2015-04-01T16:21:59.093+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1073 } 2015-04-01T16:21:59.093+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1074 } 2015-04-01T16:21:59.093+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1075 } 2015-04-01T16:21:59.093+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1076 } 2015-04-01T16:21:59.093+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1077 } 2015-04-01T16:21:59.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1078 } 2015-04-01T16:21:59.094+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1079 } 2015-04-01T16:21:59.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1080 } 2015-04-01T16:21:59.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1081 } 2015-04-01T16:21:59.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1082 } 2015-04-01T16:21:59.094+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1083 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1084 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1085 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1086 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1087 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1088 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1089 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1090 } 2015-04-01T16:21:59.095+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1091 } 2015-04-01T16:21:59.096+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1092 } 2015-04-01T16:21:59.096+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1093 } 2015-04-01T16:21:59.096+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1094 } 2015-04-01T16:21:59.096+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1095 } 2015-04-01T16:21:59.096+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1096 } 2015-04-01T16:21:59.097+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1097 } 2015-04-01T16:21:59.097+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1098 } 2015-04-01T16:21:59.097+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1099 } 2015-04-01T16:21:59.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1100 } 2015-04-01T16:21:59.098+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1101 } 2015-04-01T16:21:59.098+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1102 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1103 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1104 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1105 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1106 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1107 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1108 } 2015-04-01T16:21:59.099+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1109 } 2015-04-01T16:21:59.100+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1110 } 2015-04-01T16:21:59.100+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1111 } 2015-04-01T16:21:59.100+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1112 } 2015-04-01T16:21:59.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1113 } 2015-04-01T16:21:59.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1114 } 2015-04-01T16:21:59.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1115 } 2015-04-01T16:21:59.101+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1116 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1117 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1118 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1119 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1120 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1121 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1122 } 2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1123 } 
2015-04-01T16:21:59.102+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1124 } 2015-04-01T16:21:59.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1125 } 2015-04-01T16:21:59.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1126 } 2015-04-01T16:21:59.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1127 } 2015-04-01T16:21:59.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1128 } 2015-04-01T16:21:59.103+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1129 } 2015-04-01T16:21:59.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1130 } 2015-04-01T16:21:59.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1131 } 2015-04-01T16:21:59.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1132 } 2015-04-01T16:21:59.104+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1133 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1134 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1135 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1136 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1137 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1138 } 2015-04-01T16:21:59.105+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1139 } 2015-04-01T16:21:59.106+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1140 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1141 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1142 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1143 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1144 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1145 } 2015-04-01T16:21:59.107+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1146 } 2015-04-01T16:21:59.107+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1147 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1148 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1149 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1150 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1151 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1152 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1153 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1154 } 2015-04-01T16:21:59.108+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1155 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1156 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1157 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1158 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1159 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1160 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1161 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1162 } 2015-04-01T16:21:59.109+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1163 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1164 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1165 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1166 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1167 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1168 } 2015-04-01T16:21:59.110+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1169 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1170 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1171 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1172 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1173 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1174 } 2015-04-01T16:21:59.111+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1175 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1176 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1177 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1178 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1179 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1180 } 2015-04-01T16:21:59.113+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1181 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1182 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1183 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1184 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1185 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1186 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1187 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1188 } 2015-04-01T16:21:59.114+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1189 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1190 } 2015-04-01T16:21:59.115+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1191 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1192 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1193 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1194 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1195 } 2015-04-01T16:21:59.115+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1196 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1197 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1198 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1199 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1200 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1201 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1202 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1203 } 2015-04-01T16:21:59.116+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1204 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1205 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1206 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1207 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1208 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1209 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1210 } 2015-04-01T16:21:59.117+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1211 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1212 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1213 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1214 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1215 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1216 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1217 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1218 } 2015-04-01T16:21:59.118+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1219 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1220 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1221 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1222 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1223 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1224 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1225 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1226 } 2015-04-01T16:21:59.119+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1227 } 2015-04-01T16:21:59.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1228 } 2015-04-01T16:21:59.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1229 } 2015-04-01T16:21:59.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1230 } 2015-04-01T16:21:59.120+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1231 } 2015-04-01T16:21:59.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1232 } 2015-04-01T16:21:59.121+0000 D REPL [rsBackgroundSync] bgsync buffer has 1160 bytes 2015-04-01T16:21:59.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1233 } 2015-04-01T16:21:59.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1234 } 
2015-04-01T16:21:59.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1235 } 2015-04-01T16:21:59.121+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1236 } 2015-04-01T16:21:59.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1237 } 2015-04-01T16:21:59.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1238 } 2015-04-01T16:21:59.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1239 } 2015-04-01T16:21:59.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1240 } 2015-04-01T16:21:59.122+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1241 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1242 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1243 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1244 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1245 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1246 } 2015-04-01T16:21:59.123+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1247 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1248 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1249 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1250 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1251 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1252 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1253 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1254 } 2015-04-01T16:21:59.124+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1255 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1256 } 2015-04-01T16:21:59.125+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1257 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1258 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1259 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1260 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1261 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1262 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1263 } 2015-04-01T16:21:59.125+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1264 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1265 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1266 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1267 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1268 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1269 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1270 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1271 } 2015-04-01T16:21:59.126+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1272 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1273 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1274 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1275 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1276 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1277 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1278 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1279 } 2015-04-01T16:21:59.127+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1280 } 2015-04-01T16:21:59.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1281 } 2015-04-01T16:21:59.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1282 } 2015-04-01T16:21:59.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1283 } 2015-04-01T16:21:59.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1284 } 2015-04-01T16:21:59.128+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1285 } 2015-04-01T16:21:59.129+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1286 } 2015-04-01T16:21:59.129+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1287 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1288 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1289 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1290 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1291 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1292 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1293 } 2015-04-01T16:21:59.130+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1294 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1295 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1296 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1297 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1298 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1299 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1300 } 2015-04-01T16:21:59.131+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1301 } 2015-04-01T16:21:59.131+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1302 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1303 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1304 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1305 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1306 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1307 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1308 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1309 } 2015-04-01T16:21:59.132+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1310 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1311 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1312 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1313 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1314 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1315 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1316 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1317 } 2015-04-01T16:21:59.133+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1318 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1319 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1320 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1321 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1322 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1323 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1324 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1325 } 2015-04-01T16:21:59.134+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1326 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1327 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1328 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1329 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1330 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1331 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1332 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1333 } 2015-04-01T16:21:59.135+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1334 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1335 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1336 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1337 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1338 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1339 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1340 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1341 } 2015-04-01T16:21:59.136+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1342 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1343 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1344 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1345 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1346 } 
2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1347 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1348 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1349 } 2015-04-01T16:21:59.137+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1350 } 2015-04-01T16:21:59.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1351 } 2015-04-01T16:21:59.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1352 } 2015-04-01T16:21:59.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1353 } 2015-04-01T16:21:59.138+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1354 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1355 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1356 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1357 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1358 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1359 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1360 } 2015-04-01T16:21:59.139+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1361 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1362 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1363 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1364 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1365 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1366 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1367 } 2015-04-01T16:21:59.140+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1368 } 2015-04-01T16:21:59.140+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1369 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1370 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1371 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1372 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1373 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1374 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1375 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1376 } 2015-04-01T16:21:59.141+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1377 } 2015-04-01T16:21:59.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1378 } 2015-04-01T16:21:59.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1379 } 2015-04-01T16:21:59.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1380 } 2015-04-01T16:21:59.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1381 } 2015-04-01T16:21:59.142+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1382 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1383 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1384 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1385 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1386 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1387 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1388 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1389 } 2015-04-01T16:21:59.143+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1390 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1391 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1392 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1393 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1394 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1395 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1396 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1397 } 2015-04-01T16:21:59.144+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1398 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1399 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1400 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1401 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1402 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1403 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1404 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1405 } 2015-04-01T16:21:59.145+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1406 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1407 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1408 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1409 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1410 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1411 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1412 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1413 } 2015-04-01T16:21:59.146+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1414 } 2015-04-01T16:21:59.146+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1415 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1416 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1417 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1418 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1419 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1420 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1421 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1422 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1423 } 2015-04-01T16:21:59.147+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1424 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1425 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1426 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1427 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1428 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1429 } 2015-04-01T16:21:59.148+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1430 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1431 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1432 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1433 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1434 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1435 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1436 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1437 } 2015-04-01T16:21:59.149+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1438 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1439 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1440 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1441 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1442 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1443 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1444 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1445 } 2015-04-01T16:21:59.150+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1446 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1447 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1448 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1449 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1450 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1451 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1452 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1453 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1454 } 2015-04-01T16:21:59.151+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1455 } 2015-04-01T16:21:59.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1456 } 2015-04-01T16:21:59.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1457 } 2015-04-01T16:21:59.152+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1458 } 
2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1459 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1460 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1461 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1462 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1463 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1464 } 2015-04-01T16:21:59.153+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1465 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1466 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1467 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1468 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1469 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1470 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1471 } 2015-04-01T16:21:59.154+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1472 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1473 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1474 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1475 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1476 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1477 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1478 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1479 } 2015-04-01T16:21:59.155+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1480 } 2015-04-01T16:21:59.156+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1481 } 2015-04-01T16:21:59.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1482 } 2015-04-01T16:21:59.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1483 } 2015-04-01T16:21:59.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1484 } 2015-04-01T16:21:59.156+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1485 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1486 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1487 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1488 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1489 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1490 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1491 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1492 } 2015-04-01T16:21:59.157+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1493 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1494 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1495 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1496 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1497 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1498 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1499 } 2015-04-01T16:21:59.158+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1500 } 2015-04-01T16:21:59.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1501 } 2015-04-01T16:21:59.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1502 } 2015-04-01T16:21:59.159+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1503 } 2015-04-01T16:21:59.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1504 } 2015-04-01T16:21:59.159+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1505 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1506 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1507 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1508 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1509 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1510 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1511 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1512 } 2015-04-01T16:21:59.160+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1513 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1514 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1515 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1516 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1517 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1518 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1519 } 2015-04-01T16:21:59.161+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1520 } 2015-04-01T16:21:59.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1521 } 2015-04-01T16:21:59.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1522 } 2015-04-01T16:21:59.162+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1523 } 2015-04-01T16:21:59.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1524 } 2015-04-01T16:21:59.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1525 } 2015-04-01T16:21:59.163+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1526 } 2015-04-01T16:21:59.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1527 } 2015-04-01T16:21:59.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1528 } 2015-04-01T16:21:59.163+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1529 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1530 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1531 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1532 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1533 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1534 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1535 } 2015-04-01T16:21:59.164+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1536 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1537 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1538 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1539 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1540 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1541 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1542 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1543 } 2015-04-01T16:21:59.165+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1544 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1545 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1546 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1547 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1548 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1549 } 2015-04-01T16:21:59.166+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1550 } 2015-04-01T16:21:59.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1551 } 2015-04-01T16:21:59.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1552 } 2015-04-01T16:21:59.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1553 } 2015-04-01T16:21:59.167+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1554 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1555 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1556 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1557 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1558 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1559 } 2015-04-01T16:21:59.168+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1560 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1561 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1562 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1563 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1564 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1565 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1566 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1567 } 2015-04-01T16:21:59.169+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1568 } 2015-04-01T16:21:59.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1569 } 2015-04-01T16:21:59.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1570 } 
2015-04-01T16:21:59.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1571 } 2015-04-01T16:21:59.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1572 } 2015-04-01T16:21:59.170+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1573 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1574 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1575 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1576 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1577 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1578 } 2015-04-01T16:21:59.171+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1579 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1580 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1581 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1582 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1583 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1584 } 2015-04-01T16:21:59.172+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1585 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1586 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1587 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1588 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1589 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1590 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1591 } 2015-04-01T16:21:59.173+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1592 } 2015-04-01T16:21:59.173+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1593 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1594 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1595 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1596 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1597 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1598 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1599 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1600 } 2015-04-01T16:21:59.174+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1601 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1602 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1603 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1604 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1605 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1606 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1607 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1608 } 2015-04-01T16:21:59.175+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1609 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1610 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1611 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1612 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1613 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1614 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1615 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1616 } 2015-04-01T16:21:59.176+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1617 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1618 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1619 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1620 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1621 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1622 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1623 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1624 } 2015-04-01T16:21:59.177+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1625 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1626 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1627 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1628 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1629 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1630 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1631 } 2015-04-01T16:21:59.178+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1632 } 2015-04-01T16:21:59.179+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1633 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1634 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1635 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1636 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1637 } 2015-04-01T16:21:59.180+0000 
D QUERY [repl writer worker 15] Using idhack: { _id: 1638 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1639 } 2015-04-01T16:21:59.180+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1640 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1641 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1642 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1643 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1644 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1645 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1646 } 2015-04-01T16:21:59.181+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1647 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1648 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1649 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1650 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1651 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1652 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1653 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1654 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1655 } 2015-04-01T16:21:59.182+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1656 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1657 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1658 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1659 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1660 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1661 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1662 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1663 } 2015-04-01T16:21:59.183+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1664 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1665 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1666 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1667 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1668 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1669 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1670 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1671 } 2015-04-01T16:21:59.184+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1672 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1673 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1674 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1675 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1676 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1677 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1678 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1679 } 2015-04-01T16:21:59.185+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1680 } 2015-04-01T16:21:59.186+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1681 } 2015-04-01T16:21:59.186+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1682 } 
2015-04-01T16:21:59.186+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1683 } 2015-04-01T16:21:59.187+0000 D REPL [rsBackgroundSync] bgsync buffer has 3020 bytes 2015-04-01T16:21:59.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1684 } 2015-04-01T16:21:59.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1685 } 2015-04-01T16:21:59.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1686 } 2015-04-01T16:21:59.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1687 } 2015-04-01T16:21:59.187+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1688 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1689 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1690 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1691 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1692 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1693 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1694 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1695 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1696 } 2015-04-01T16:21:59.188+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1697 } 2015-04-01T16:21:59.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1698 } 2015-04-01T16:21:59.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1699 } 2015-04-01T16:21:59.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1700 } 2015-04-01T16:21:59.189+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1701 } 2015-04-01T16:21:59.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1702 } 2015-04-01T16:21:59.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1703 } 2015-04-01T16:21:59.191+0000 D QUERY [repl 
writer worker 15] Using idhack: { _id: 1704 } 2015-04-01T16:21:59.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1705 } 2015-04-01T16:21:59.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1706 } 2015-04-01T16:21:59.191+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1707 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1708 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1709 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1710 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1711 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1712 } 2015-04-01T16:21:59.192+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1713 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1714 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1715 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1716 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1717 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1718 } 2015-04-01T16:21:59.193+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1719 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1720 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1721 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1722 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1723 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1724 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1725 } 2015-04-01T16:21:59.194+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 
1726 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1727 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1728 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1729 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1730 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1731 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1732 } 2015-04-01T16:21:59.195+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1733 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1734 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1735 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1736 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1737 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1738 } 2015-04-01T16:21:59.196+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1739 } 2015-04-01T16:21:59.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1740 } 2015-04-01T16:21:59.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1741 } 2015-04-01T16:21:59.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1742 } 2015-04-01T16:21:59.197+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1743 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1744 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1745 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1746 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1747 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1748 } 2015-04-01T16:21:59.198+0000 D 
QUERY [repl writer worker 15] Using idhack: { _id: 1749 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1750 } 2015-04-01T16:21:59.198+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1751 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1752 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1753 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1754 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1755 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1756 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1757 } 2015-04-01T16:21:59.199+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1758 } 2015-04-01T16:21:59.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1759 } 2015-04-01T16:21:59.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1760 } 2015-04-01T16:21:59.200+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1761 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1762 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1763 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1764 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1765 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1766 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1767 } 2015-04-01T16:21:59.201+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1768 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1769 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1770 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using 
idhack: { _id: 1771 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1772 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1773 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1774 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1775 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1776 } 2015-04-01T16:21:59.202+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1777 } 2015-04-01T16:21:59.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1778 } 2015-04-01T16:21:59.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1779 } 2015-04-01T16:21:59.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1780 } 2015-04-01T16:21:59.203+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1781 } 2015-04-01T16:21:59.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1782 } 2015-04-01T16:21:59.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1783 } 2015-04-01T16:21:59.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1784 } 2015-04-01T16:21:59.204+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1785 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1786 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1787 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1788 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1789 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1790 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1791 } 2015-04-01T16:21:59.205+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1792 } 2015-04-01T16:21:59.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1793 } 
2015-04-01T16:21:59.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1794 } 2015-04-01T16:21:59.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1795 } 2015-04-01T16:21:59.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1796 } 2015-04-01T16:21:59.206+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1797 } 2015-04-01T16:21:59.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1798 } 2015-04-01T16:21:59.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1799 } 2015-04-01T16:21:59.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1800 } 2015-04-01T16:21:59.207+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1801 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1802 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1803 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1804 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1805 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1806 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1807 } 2015-04-01T16:21:59.208+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1808 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1809 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1810 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1811 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1812 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1813 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1814 } 2015-04-01T16:21:59.209+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1815 } 2015-04-01T16:21:59.209+0000 D QUERY 
[repl writer worker 15] Using idhack: { _id: 1816 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1817 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1818 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1819 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1820 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1821 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1822 } 2015-04-01T16:21:59.210+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1823 } 2015-04-01T16:21:59.211+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1824 } 2015-04-01T16:21:59.211+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1825 } 2015-04-01T16:21:59.211+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1826 } 2015-04-01T16:21:59.211+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1827 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1828 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1829 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1830 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1831 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1832 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1833 } 2015-04-01T16:21:59.212+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1834 } 2015-04-01T16:21:59.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1835 } 2015-04-01T16:21:59.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1836 } 2015-04-01T16:21:59.213+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1837 } 2015-04-01T16:21:59.213+0000 D QUERY [repl writer worker 15] Using idhack: { 
_id: 1838 } [... 161 consecutive "D QUERY [repl writer worker 15] Using idhack: { _id: N }" entries for N = 1839 through 1999, timestamps 2015-04-01T16:21:59.213+0000 through 2015-04-01T16:21:59.238+0000, elided ...] 2015-04-01T16:21:59.238+0000 D QUERY [repl writer worker 15] Tests04011621.testcollection: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:59.238+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b26e15b5605d452cbac') } 2015-04-01T16:21:59.239+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:59.243+0000 D QUERY [rsSync] local.oplog.rs: clearing collection plan cache - 1000 write operations detected since last refresh. 2015-04-01T16:21:59.245+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
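[Editor's note] The "clearing collection plan cache - 1000 write operations detected since last refresh" entries above record MongoDB's write-counter heuristic: cached query plans for a collection are discarded once enough writes have occurred since the last refresh. A minimal toy sketch of that heuristic (hypothetical class and method names, not MongoDB's actual implementation):

```python
class PlanCache:
    """Toy per-collection plan cache that is flushed after a fixed
    number of write operations, mirroring the log entries above."""

    WRITE_OPS_BEFORE_REFRESH = 1000  # threshold seen in the log

    def __init__(self):
        self._plans = {}   # query shape -> cached plan
        self._writes = 0   # writes since the last refresh

    def note_write(self):
        self._writes += 1
        if self._writes >= self.WRITE_OPS_BEFORE_REFRESH:
            self._plans.clear()  # "clearing collection plan cache"
            self._writes = 0

    def get(self, shape):
        return self._plans.get(shape)

    def put(self, shape, plan):
        self._plans[shape] = plan


cache = PlanCache()
cache.put("{x: 1}", "IXSCAN x_1")
for _ in range(1000):
    cache.note_write()
# after 1000 writes the cached plan is gone
assert cache.get("{x: 1}") is None
```

This only models the trigger condition visible in the log; the real server also re-plans based on plan performance, which is not shown here.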
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.245+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905318000|2577, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.247+0000 D REPL [rsBackgroundSync] bgsync buffer has 4632 bytes 2015-04-01T16:21:59.247+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.247+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbad') } 2015-04-01T16:21:59.248+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|1, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.248+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
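[Editor's note] The pervasive "Using idhack" entries mark the _id fast path: a query that is an exact equality on _id alone skips the full query planner and goes straight to the _id index. A toy eligibility check and lookup, with hypothetical names (a deliberate simplification of the real rule):

```python
def is_idhack_eligible(query):
    """True when the query is a bare exact-match on _id only,
    the shape that the log reports as 'Using idhack'."""
    return set(query) == {"_id"} and not isinstance(query["_id"], dict)

# stand-in for the _id index: id -> document
documents = {1838: {"_id": 1838, "x": 1}}

def find_one(query):
    if is_idhack_eligible(query):
        return documents.get(query["_id"])  # direct _id lookup, no planning
    raise NotImplementedError("full planner path not modeled here")

assert find_one({"_id": 1838}) == {"_id": 1838, "x": 1}
assert not is_idhack_eligible({"_id": {"$gt": 1}})  # range on _id: no idhack
```

This is why the repl writer workers above log one idhack line per applied oplog entry: each replicated update targets a single document by _id.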
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.252+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.252+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:59.252+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:59.252+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|2, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.252+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.253+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.253+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.254+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 0 eloc: 3:164d9000 2015-04-01T16:21:59.254+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.254+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:59.254+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.254+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:59.254+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:59.254+0000 I INDEX [repl 
writer worker 15] build index done. scanned 2 total records. 0 secs 2015-04-01T16:21:59.254+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.254+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.254+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|3, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.254+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.255+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.255+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.255+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.255+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.255+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.255+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.256+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.256+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 
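[Editor's note] The index build above ("building index using bulk method ... bulk commit starting ... done building bottom layer") scans the collection, sorts the extracted keys, and commits the sorted run as the bottom layer of the B-tree. A toy sketch of that sort-then-commit shape (hypothetical function, not the server's builder):

```python
def bulk_build_index(docs, key):
    """Toy bulk index build: extract (key value, _id) pairs, sort them
    once, and return the sorted run that would be committed as the
    bottom layer of the index."""
    return sorted((d[key], d["_id"]) for d in docs if key in d)

# the log's x_1 build scanned 2 records
docs = [{"_id": 1, "x": 5}, {"_id": 2, "x": 3}]
index = bulk_build_index(docs, "x")
assert index == [(3, 2), (5, 1)]
```

Bulk builds trade memory for speed: one external sort is cheaper than inserting each key into the tree individually, which is why the server prefers it for offline/initial builds like this replicated one.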
2015-04-01T16:21:59.257+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.257+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|4, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.257+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.258+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.258+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.258+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.259+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.259+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.259+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.259+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164b9000 2015-04-01T16:21:59.259+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.259+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|5, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.260+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.261+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbae') } 2015-04-01T16:21:59.261+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbae') } 2015-04-01T16:21:59.261+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|7, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.262+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.263+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.264+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.264+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.264+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.264+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.264+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.264+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.264+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|8, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.265+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.266+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.267+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.267+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.268+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.268+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.268+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.268+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164b9000 2015-04-01T16:21:59.268+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.268+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|9, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.271+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.271+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:59.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbaf') } 2015-04-01T16:21:59.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbaf') } 2015-04-01T16:21:59.271+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb0') } 2015-04-01T16:21:59.272+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|12, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.272+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.272+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.273+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:59.273+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:59.273+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|13, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.274+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.275+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb0') } 2015-04-01T16:21:59.275+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb1') } 2015-04-01T16:21:59.275+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|15, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.276+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.277+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.277+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.277+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.278+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.278+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:59.278+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.278+0000 D INDEX [repl writer worker 15] bulk commit starting for index: x_1 2015-04-01T16:21:59.278+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:59.278+0000 I INDEX [repl writer 
worker 15] build index done. scanned 1 total records. 0 secs 2015-04-01T16:21:59.278+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.278+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.279+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|16, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.279+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.280+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.281+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { dropIndexes: "testcollection", index: "*" } 2015-04-01T16:21:59.281+0000 I COMMAND [repl writer worker 15] CMD: dropIndexes Tests04011621.testcollection 2015-04-01T16:21:59.281+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "x_1", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.281+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.282+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|17, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, 
slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.282+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.283+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb1') } 2015-04-01T16:21:59.283+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb2') } 2015-04-01T16:21:59.283+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|19, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.283+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.284+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.284+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.284+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.284+0000 I INDEX [repl writer worker 15] build index on: Tests04011621.testcollection properties: { v: 1, key: { x: 1 }, name: "xIndex", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.284+0000 I INDEX [repl writer worker 15] building index using bulk method 2015-04-01T16:21:59.284+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.284+0000 D INDEX [repl writer worker 15] bulk commit starting for index: xIndex 2015-04-01T16:21:59.284+0000 D INDEX [repl writer worker 15] done building bottom layer, going to commit 2015-04-01T16:21:59.285+0000 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs 2015-04-01T16:21:59.285+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.285+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.285+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|20, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.285+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.287+0000 D REPL [rsSync] replication batch size is 6 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb2') } 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb3') } 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb4') } 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb5') } 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb6') } 2015-04-01T16:21:59.287+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb7') } 2015-04-01T16:21:59.288+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.288+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|26, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.289+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.290+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.290+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.290+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.290+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { 
_id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.290+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.290+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { x: 1 }, name: "xIndex", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.290+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.290+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.291+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|27, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.291+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.292+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.292+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.293+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.293+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|28, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.293+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.294+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb9') } 2015-04-01T16:21:59.294+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbb9') } 2015-04-01T16:21:59.294+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|30, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.295+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.296+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.296+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.296+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.296+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.296+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.297+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.297+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.297+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: 
Timestamp 1427905319000|31, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.297+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.298+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.298+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.298+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.298+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|32, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.299+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.299+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:59.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbba') } 2015-04-01T16:21:59.299+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbb') } 2015-04-01T16:21:59.300+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbba') } 2015-04-01T16:21:59.300+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|35, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.300+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.300+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.300+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.300+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.301+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.301+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.301+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.301+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.301+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|36, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.301+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.302+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.302+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.302+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.303+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.303+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.303+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.303+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.303+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.303+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|37, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.304+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.304+0000 D REPL [rsSync] replication batch size is 3 2015-04-01T16:21:59.304+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbc') } 2015-04-01T16:21:59.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbd') } 2015-04-01T16:21:59.305+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbc') } 2015-04-01T16:21:59.305+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.305+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|40, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.306+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.306+0000 D REPL [rsBackgroundSync] bgsync buffer has 1554 bytes 2015-04-01T16:21:59.307+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.307+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.307+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.307+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.307+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.307+0000 D STORAGE [repl writer worker 15] dropIndexes 
done 2015-04-01T16:21:59.307+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|41, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.307+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.310+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.310+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.310+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.310+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.310+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.310+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.310+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.311+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.311+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|42, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" 
}, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.311+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.312+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbe') } 2015-04-01T16:21:59.312+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbe') } 2015-04-01T16:21:59.312+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|44, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.312+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.313+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.313+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "duplicatekeys" } 2015-04-01T16:21:59.313+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.duplicatekeys 2015-04-01T16:21:59.313+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.duplicatekeys 2015-04-01T16:21:59.313+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.duplicatekeys" } 2015-04-01T16:21:59.313+0000 D STORAGE [repl writer worker 15] Tests04011621.duplicatekeys: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.313+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.314+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|45, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.314+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.314+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.314+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "duplicatekeys" } 2015-04-01T16:21:59.314+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.duplicatekeys {} 2015-04-01T16:21:59.315+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 3:16492000 2015-04-01T16:21:59.315+0000 D STORAGE [repl writer worker 15] Tests04011621.duplicatekeys: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.315+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.315+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 0:ef000 2015-04-01T16:21:59.315+0000 D STORAGE [repl writer worker 15] Tests04011621.duplicatekeys: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.315+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|46, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.315+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.316+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.316+0000 D QUERY [repl writer worker 15] Using idhack: { _id: 1 } 2015-04-01T16:21:59.316+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|47, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.316+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.317+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.317+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.317+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.317+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.317+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.317+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.317+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.318+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|48, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 
1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.318+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.318+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.318+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.318+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.319+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.319+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.319+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.319+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.319+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.319+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|49, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.319+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.320+0000 D REPL [rsSync] replication batch size is 2 2015-04-01T16:21:59.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbf') } 2015-04-01T16:21:59.320+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbbf') } 2015-04-01T16:21:59.320+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|51, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.320+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.321+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.321+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.321+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.321+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.321+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.321+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.322+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.322+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: 
Timestamp 1427905319000|52, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.322+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.323+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.323+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" } 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {} 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] allocating new extent 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000 2015-04-01T16:21:59.323+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.323+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|53, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.323+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.324+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.324+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbc0') } 2015-04-01T16:21:59.324+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|54, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:21:59.324+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:21:59.325+0000 D REPL [rsSync] replication batch size is 1 2015-04-01T16:21:59.325+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" } 2015-04-01T16:21:59.325+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection 2015-04-01T16:21:59.325+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection 2015-04-01T16:21:59.325+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" } 2015-04-01T16:21:59.325+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset 2015-04-01T16:21:59.326+0000 D STORAGE [repl writer worker 15] dropIndexes done 2015-04-01T16:21:59.326+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|55, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, 
buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.326+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.326+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.327+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000
2015-04-01T16:21:59.327+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.327+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|56, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.327+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.328+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.328+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbc1') }
2015-04-01T16:21:59.328+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|57, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.342+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.343+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.343+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:59.343+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:59.343+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:59.343+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:59.343+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.343+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:59.344+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|58, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.346+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.346+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.346+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:59.346+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:59.347+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:59.347+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.347+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:59.347+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000
2015-04-01T16:21:59.348+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.348+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|59, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.349+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.349+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.350+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbc2') }
2015-04-01T16:21:59.350+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|60, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.352+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.352+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.353+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:59.353+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:59.353+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:59.353+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:59.353+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.353+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:59.353+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|61, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.355+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.357+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.357+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000
2015-04-01T16:21:59.357+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.358+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|62, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.358+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.359+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.359+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbc3') }
2015-04-01T16:21:59.359+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|63, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.363+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.363+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.363+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { drop: "testcollection" }
2015-04-01T16:21:59.363+0000 I COMMAND [repl writer worker 15] CMD: drop Tests04011621.testcollection
2015-04-01T16:21:59.363+0000 D STORAGE [repl writer worker 15] dropCollection: Tests04011621.testcollection
2015-04-01T16:21:59.364+0000 D INDEX [repl writer worker 15] dropAllIndexes dropping: { v: 1, key: { _id: 1 }, name: "_id_", ns: "Tests04011621.testcollection" }
2015-04-01T16:21:59.364+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.364+0000 D STORAGE [repl writer worker 15] dropIndexes done
2015-04-01T16:21:59.364+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|64, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.366+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.367+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.368+0000 D COMMAND [repl writer worker 15] run command Tests04011621.$cmd { create: "testcollection" }
2015-04-01T16:21:59.368+0000 D STORAGE [repl writer worker 15] create collection Tests04011621.testcollection {}
2015-04-01T16:21:59.368+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:8192 fromFreeList: 1 eloc: 0:6e5000
2015-04-01T16:21:59.368+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.368+0000 D STORAGE [repl writer worker 15] allocating new extent
2015-04-01T16:21:59.368+0000 D STORAGE [repl writer worker 15] MmapV1ExtentManager::allocateExtent desiredSize:131072 fromFreeList: 1 eloc: 3:164d9000
2015-04-01T16:21:59.369+0000 D STORAGE [repl writer worker 15] Tests04011621.testcollection: clearing plan cache - collection info cache reset
2015-04-01T16:21:59.369+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|65, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:21:59.370+0000 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:21:59.370+0000 D REPL [rsSync] replication batch size is 1
2015-04-01T16:21:59.370+0000 D QUERY [repl writer worker 15] Using idhack: { _id: ObjectId('551c1b27e15b5605d452cbc4') }
2015-04-01T16:21:59.371+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] }
2015-04-01T16:22:00.555+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:00.555+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:22:00.556+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:00.556+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:00.753+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:00.753+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:00.753+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:00.888+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:00.888+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:00.888+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:02.888Z
2015-04-01T16:22:01.073+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:01.073+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:22:01.073+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:01.073+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:03.073Z
2015-04-01T16:22:01.074+0000 D COMMAND [conn18] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:01.074+0000 D COMMAND [conn18] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:01.074+0000 I COMMAND [conn18] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:02.753+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:02.753+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:02.753+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:02.888+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:02.888+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:02.888+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:04.888Z
2015-04-01T16:22:03.073+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:03.073+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:03.073+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:05.073Z
2015-04-01T16:22:03.074+0000 D NETWORK [conn18] SocketException: remote: 127.0.0.1:62992 error: 9001 socket exception [CLOSED] server [127.0.0.1:62992]
2015-04-01T16:22:03.074+0000 I NETWORK [conn18] end connection 127.0.0.1:62992 (3 connections now open)
2015-04-01T16:22:03.075+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63005 #20 (4 connections now open)
2015-04-01T16:22:03.077+0000 D QUERY [conn20] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:22:03.077+0000 D COMMAND [conn20] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D54346C484C337A31515A2F43433966417841696B6E4A52786E4851653848546D) }
2015-04-01T16:22:03.077+0000 I COMMAND [conn20] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D54346C484C337A31515A2F43433966417841696B6E4A52786E4851653848546D) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:22:03.094+0000 D COMMAND [conn20] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D54346C484C337A31515A2F43433966417841696B6E4A52786E4851653848546D47564D634D445551706C6C63424D78765839704C544D43507A4863637A...), conversationId: 1 }
2015-04-01T16:22:03.094+0000 I COMMAND [conn20] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D54346C484C337A31515A2F43433966417841696B6E4A52786E4851653848546D47564D634D445551706C6C63424D78765839704C544D43507A4863637A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:22:03.095+0000 D COMMAND [conn20] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:22:03.095+0000 I ACCESS [conn20] Successfully authenticated as principal __system on local
2015-04-01T16:22:03.095+0000 I COMMAND [conn20] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:22:03.095+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:03.095+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:03.095+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:04.396+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:04.397+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:22:04.397+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:04.397+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:04.753+0000 D COMMAND [conn19] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:04.888+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:06.010+0000 D COMMAND [conn19] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:06.010+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:06.011+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:06.011+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:06.011+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:06.012+0000 I COMMAND [conn19] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 1ms
2015-04-01T16:22:06.012+0000 D NETWORK [conn19] SocketException: remote: 127.0.0.1:62995 error: 9001 socket exception [CLOSED] server [127.0.0.1:62995]
2015-04-01T16:22:06.012+0000 I NETWORK [conn19] end connection 127.0.0.1:62995 (3 connections now open)
2015-04-01T16:22:06.012+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:22:06.013+0000 D NETWORK [ReplExecNetThread-2] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:22:06.014+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:22:06.017+0000 W NETWORK [ReplExecNetThread-2] The server certificate does not match the host name localhost
2015-04-01T16:22:06.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:06.049+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1)
2015-04-01T16:22:06.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:08.049Z
2015-04-01T16:22:06.053+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:22:08.012+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:08.012+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:08.012+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63010 #21 (4 connections now open)
2015-04-01T16:22:08.012+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:08.015+0000 D QUERY [conn21] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:22:08.015+0000 D COMMAND [conn21] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D38726449576B397075334C50444E7249356162574E38717635336D3045654E31) }
2015-04-01T16:22:08.015+0000 I COMMAND [conn21] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D38726449576B397075334C50444E7249356162574E38717635336D3045654E31) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:22:08.036+0000 D COMMAND [conn21] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D38726449576B397075334C50444E7249356162574E38717635336D3045654E31435449475436656B696C6859335A4A4D545A54766A6770503730615866...), conversationId: 1 }
2015-04-01T16:22:08.036+0000 I COMMAND [conn21] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D38726449576B397075334C50444E7249356162574E38717635336D3045654E31435449475436656B696C6859335A4A4D545A54766A6770503730615866...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:22:08.036+0000 D COMMAND [conn21] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:22:08.036+0000 I ACCESS [conn21] Successfully authenticated as principal __system on local
2015-04-01T16:22:08.036+0000 I COMMAND [conn21] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:22:08.037+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:08.037+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:08.037+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:08.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:08.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:08.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:10.049Z
2015-04-01T16:22:10.012+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:10.012+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:10.012+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:10.037+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:10.037+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:10.037+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:10.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:10.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:10.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:12.049Z
2015-04-01T16:22:10.555+0000 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:10.555+0000 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:22:10.556+0000 D COMMAND [conn16] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:10.556+0000 I COMMAND [conn16] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:12.012+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:12.012+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:12.012+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:12.037+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:12.037+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:12.037+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:12.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:12.049+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:12.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:12.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:14.049Z
2015-04-01T16:22:14.013+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:14.013+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:14.013+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:14.038+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:14.038+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:14.038+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:14.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:14.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:14.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:16.049Z
2015-04-01T16:22:14.395+0000 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:14.396+0000 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:22:14.397+0000 D COMMAND [conn17] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:14.397+0000 I COMMAND [conn17] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:16.013+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:16.013+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:16.013+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:16.038+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:16.038+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:16.038+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:16.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:16.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:16.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:18.049Z
2015-04-01T16:22:16.553+0000 I NETWORK [ReplExecNetThread-0] Socket recv() timeout 127.0.0.1:27017
2015-04-01T16:22:16.553+0000 I NETWORK [ReplExecNetThread-0] SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [RECV_TIMEOUT] server [127.0.0.1:27017]
2015-04-01T16:22:16.553+0000 I NETWORK [ReplExecNetThread-0] DBClientCursor::init call() failed
2015-04-01T16:22:16.553+0000 D - [ReplExecNetThread-0] User Assertion: 10276:DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D534D4E376A716762326245447A315847506C4D2B50695A6E2B6E627338315834) }
2015-04-01T16:22:16.740+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was Location10276 DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D534D4E376A716762326245447A315847506C4D2B50695A6E2B6E627338315834) }
2015-04-01T16:22:16.740+0000 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:27017; Location10276 DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D534D4E376A716762326245447A315847506C4D2B50695A6E2B6E627338315834) }
2015-04-01T16:22:16.740+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:18.740Z
2015-04-01T16:22:16.740+0000 I REPL [ReplicationExecutor] Standing for election
2015-04-01T16:22:16.740+0000 D REPL [ReplicationExecutor] Scheduling replSetFresh to localhost:27019
2015-04-01T16:22:16.740+0000 D COMMAND [conn21] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806646889447490), who: "localhost:27019", cfgver: 1, id: 2 }
2015-04-01T16:22:16.740+0000 D COMMAND [conn21] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806646889447490), who: "localhost:27019", cfgver: 1, id: 2 }
2015-04-01T16:22:16.740+0000 I COMMAND [conn21] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806646889447490), who: "localhost:27019", cfgver: 1, id: 2 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:171 locks:{} 0ms
2015-04-01T16:22:16.741+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetFresh to localhost:27019 was OK
2015-04-01T16:22:16.741+0000 D REPL [ReplicationExecutor] FreshnessChecker: Got response from localhost:27019 of { opTime: new Date(6132806646889447490), fresher: false, veto: false, ok: 1.0 }
2015-04-01T16:22:16.741+0000 I REPL [ReplicationExecutor] replSet possible election tie; sleeping 467ms until 2015-04-01T16:22:17.208+0000
2015-04-01T16:22:17.208+0000 I REPL [ReplicationExecutor] Standing for election
2015-04-01T16:22:17.208+0000 D REPL [ReplicationExecutor] Scheduling replSetFresh to localhost:27019
2015-04-01T16:22:17.208+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:17.208+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetFresh to localhost:27019 was OK
2015-04-01T16:22:17.208+0000 D REPL [ReplicationExecutor] FreshnessChecker: Got response from localhost:27019 of { opTime: new Date(6132806646889447490), fresher: false, veto: false, ok: 1.0 }
2015-04-01T16:22:17.208+0000 I REPL [ReplicationExecutor] replSet info electSelf
2015-04-01T16:22:17.208+0000 D REPL [ReplicationExecutor] Scheduling replSetElect to localhost:27019
2015-04-01T16:22:17.209+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetElect to localhost:27019 was OK
2015-04-01T16:22:17.209+0000 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('551c1b39ff257d5b3c9d1a58'), ok: 1.0 }
2015-04-01T16:22:17.209+0000 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role
2015-04-01T16:22:17.209+0000 I REPL [ReplicationExecutor] transition to PRIMARY
2015-04-01T16:22:18.013+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:18.013+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:18.013+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:159 locks:{} 0ms
2015-04-01T16:22:18.038+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:18.038+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:18.038+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:139 locks:{} 0ms
2015-04-01T16:22:18.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:18.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:18.049+0000 D REPL [ReplicationExecutor] Choosing to remain primary
2015-04-01T16:22:18.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:20.049Z
2015-04-01T16:22:18.740+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:18.740+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:18.740+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:22:18.744+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost 2015-04-01T16:22:19.175+0000 D NETWORK [rsBackgroundSync] SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [CLOSED] server [127.0.0.1:27017] 2015-04-01T16:22:19.175+0000 D - [rsBackgroundSync] User Assertion: 10278:dbclient error communicating with server: localhost:27017 2015-04-01T16:22:19.177+0000 D NETWORK [ReplExecNetThread-0] SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [CLOSED] server [127.0.0.1:27017] 2015-04-01T16:22:19.177+0000 I NETWORK [ReplExecNetThread-0] DBClientCursor::init call() failed 2015-04-01T16:22:19.177+0000 D - [ReplExecNetThread-0] User Assertion: 10276:DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D456276346E53694C5033537A6F7533766F326F43554B7A56384B524E6C4C6361) } 2015-04-01T16:22:19.178+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was Location10276 DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D456276346E53694C5033537A6F7533766F326F43554B7A56384B524E6C4C6361) } 2015-04-01T16:22:19.178+0000 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:27017; Location10276 DBClientBase::findN: transport error: localhost:27017 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 
6E2C2C6E3D5F5F73797374656D2C723D456276346E53694C5033537A6F7533766F326F43554B7A56384B524E6C4C6361) } 2015-04-01T16:22:19.178+0000 D REPL [ReplicationExecutor] Bad heartbeat response from localhost:27017; trying again; Retries left: 1; 438ms have already elapsed 2015-04-01T16:22:19.178+0000 D REPL [ReplicationExecutor] Choosing to remain primary 2015-04-01T16:22:19.178+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:19.178Z 2015-04-01T16:22:19.178+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:19.179+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:19.179+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:22:19.183+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost 2015-04-01T16:22:19.222+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:19.222+0000 I REPL [ReplicationExecutor] Member localhost:27017 is now in state SECONDARY 2015-04-01T16:22:19.222+0000 I REPL [ReplicationExecutor] Stepping down self (priority 1.1) because localhost:27017 has higher priority 99 and is only 0 seconds behind me 2015-04-01T16:22:19.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:21.222Z 2015-04-01T16:22:19.222+0000 I REPL [ReplicationExecutor] Stepping down from primary in response to heartbeat 2015-04-01T16:22:19.222+0000 I REPL [replCallbackWithGlobalLock-0] transition to SECONDARY 2015-04-01T16:22:19.223+0000 D NETWORK [conn16] Socket recv() errno:10004 A blocking operation was interrupted by a call to WSACancelBlockingCall. 
127.0.0.1:62982 2015-04-01T16:22:19.223+0000 D NETWORK [conn16] SocketException: remote: 127.0.0.1:62982 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:62982] 2015-04-01T16:22:19.223+0000 I NETWORK [conn16] end connection 127.0.0.1:62982 (3 connections now open) 2015-04-01T16:22:19.223+0000 D NETWORK [conn17] Socket recv() errno:10004 A blocking operation was interrupted by a call to WSACancelBlockingCall. 127.0.0.1:62986 2015-04-01T16:22:19.223+0000 D NETWORK [conn17] SocketException: remote: 127.0.0.1:62986 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:62986] 2015-04-01T16:22:19.223+0000 I NETWORK [conn17] end connection 127.0.0.1:62986 (3 connections now open) 2015-04-01T16:22:19.230+0000 E REPL [rsBackgroundSync] sync producer problem: 10278 dbclient error communicating with server: localhost:27017 2015-04-01T16:22:19.231+0000 I NETWORK [rsBackgroundSync] Socket send() errno:10038 An operation was attempted on something that is not a socket. 127.0.0.1:27017 2015-04-01T16:22:19.231+0000 I - [rsBackgroundSync] caught exception (socket exception [SEND_ERROR] for 127.0.0.1:27017) in destructor (mongo::PiggyBackData::~PiggyBackData) 2015-04-01T16:22:19.231+0000 I REPL [ReplicationExecutor] syncing from: localhost:27017 2015-04-01T16:22:19.232+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:19.232+0000 D NETWORK [rsBackgroundSync] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:22:19.406+0000 W NETWORK [rsBackgroundSync] The server certificate does not match the host name localhost 2015-04-01T16:22:19.615+0000 D REPL [rsBackgroundSync] repl: local.oplog.rs.find({ ts: { $gte: Timestamp 1427905319000|66 } }) 2015-04-01T16:22:19.615+0000 D REPL [SyncSourceFeedback] handshaking upstream updater 2015-04-01T16:22:19.615+0000 D REPL [SyncSourceFeedback] Sending to localhost:27017 (127.0.0.1) the replication handshake: { replSetUpdatePosition: 1, handshake: { handshake: 
ObjectId('551c1ab2ff257d5b3c9d1a53'), member: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } } 2015-04-01T16:22:19.616+0000 I NETWORK [SyncSourceFeedback] Socket send() errno:10038 An operation was attempted on something that is not a socket. 127.0.0.1:27017 2015-04-01T16:22:19.616+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63022 #22 (3 connections now open) 2015-04-01T16:22:19.619+0000 W NETWORK [conn22] no SSL certificate provided by peer 2015-04-01T16:22:19.621+0000 D QUERY [conn22] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:22:19.621+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:19.621+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:19.621+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:19.621+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:19.622+0000 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending handshake: socket exception [SEND_ERROR] for 127.0.0.1:27017 2015-04-01T16:22:19.622+0000 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2015-04-01T16:22:19.622+0000 I REPL [SyncSourceFeedback] replset setting syncSourceFeedback to localhost:27017 2015-04-01T16:22:19.622+0000 D COMMAND [conn22] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D2D61346A615B4C4C3D415E6932762E53505A4239) } 2015-04-01T16:22:19.622+0000 D QUERY [conn22] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 
}, name: "user_1_db_1", ns: "admin.system.users" } 2015-04-01T16:22:19.622+0000 D QUERY [conn22] Only one plan is available; it will be run but will not be cached. query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 } 2015-04-01T16:22:19.623+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:19.623+0000 D NETWORK [SyncSourceFeedback] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:22:19.623+0000 I COMMAND [conn22] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D2D61346A615B4C4C3D415E6932762E53505A4239) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 1ms 2015-04-01T16:22:19.627+0000 W NETWORK [SyncSourceFeedback] The server certificate does not match the host name localhost 2015-04-01T16:22:19.660+0000 D REPL [SyncSourceFeedback] handshaking upstream updater 2015-04-01T16:22:19.660+0000 D REPL [SyncSourceFeedback] Sending to localhost:27017 (127.0.0.1) the replication handshake: { replSetUpdatePosition: 1, handshake: { handshake: ObjectId('551c1ab2ff257d5b3c9d1a53'), member: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } } 2015-04-01T16:22:19.661+0000 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('551c1ab2ff257d5b3c9d1a53'), optime: Timestamp 1427905319000|66, memberId: 1, cfgver: 1, config: { _id: 1, host: "localhost:27018", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.1, tags: { ordinal: "two", dc: "pa" }, slaveDelay: 0, votes: 1 } } ] } 2015-04-01T16:22:19.704+0000 D COMMAND [conn22] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 
633D626977732C723D2D61346A615B4C4C3D415E6932762E53505A42396243785738514E6277726E4C6F6F5551785A734E316A4B433834524E4D3031712C703D6A4F4864707A...) } 2015-04-01T16:22:19.704+0000 I COMMAND [conn22] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D2D61346A615B4C4C3D415E6932762E53505A42396243785738514E6277726E4C6F6F5551785A734E316A4B433834524E4D3031712C703D6A4F4864707A...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:22:19.704+0000 D COMMAND [conn22] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:22:19.704+0000 D QUERY [conn22] Relevant index 0 is kp: { user: 1, db: 1 } io: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" } 2015-04-01T16:22:19.704+0000 D QUERY [conn22] Only one plan is available; it will be run but will not be cached. query: { user: "bob", db: "admin" } sort: {} projection: {} skip: 0 limit: 0, planSummary: IXSCAN { user: 1, db: 1 } 2015-04-01T16:22:19.705+0000 I ACCESS [conn22] Successfully authenticated as principal bob on admin 2015-04-01T16:22:19.705+0000 I COMMAND [conn22] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:22:19.705+0000 D COMMAND [conn22] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:22:19.705+0000 I COMMAND [conn22] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:22:19.706+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:19.706+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:19.706+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 
2015-04-01T16:22:19.706+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:19.932+0000 I STORAGE [DataFileSync] flushing mmaps took 17860ms for 10 files 2015-04-01T16:22:20.013+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:20.013+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:20.013+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:20.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:20.014+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:20.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:20.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d064') } 2015-04-01T16:22:20.014+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d064') } 2015-04-01T16:22:20.014+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: 
"localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d064') } 2015-04-01T16:22:20.015+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 2 secs ago 2015-04-01T16:22:20.015+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d064') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:20.038+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:20.038+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:20.038+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:20.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:20.049+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:20.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:20.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:22.049Z 2015-04-01T16:22:20.216+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:20.216+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:20.216+0000 D COMMAND [conn22] run 
command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:20.216+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:20.554+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:20.554+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:20.555+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:20.555+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:20.831+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:20.831+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:20.831+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:20.832+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d065') } 2015-04-01T16:22:20.832+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d065') } 2015-04-01T16:22:20.832+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d065') } 2015-04-01T16:22:20.832+0000 I REPL [ReplicationExecutor] replSet 
voting no for localhost:27017; voted for localhost:27018 3 secs ago 2015-04-01T16:22:20.832+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3cb5355f778169d065') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:21.003+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:21.003+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:21.003+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:21.004+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d066') } 2015-04-01T16:22:21.004+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d066') } 2015-04-01T16:22:21.004+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d066') } 2015-04-01T16:22:21.004+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 3 secs ago 2015-04-01T16:22:21.004+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d066') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 
2015-04-01T16:22:21.055+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:21.055+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:21.056+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:21.056+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:21.222+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:21.222+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:21.222+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:21.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:23.222Z 2015-04-01T16:22:21.556+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:21.556+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:21.557+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:21.557+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:21.894+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:21.894+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:21.894+0000 I 
COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:21.896+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d067') } 2015-04-01T16:22:21.896+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d067') } 2015-04-01T16:22:21.896+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d067') } 2015-04-01T16:22:21.896+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 4 secs ago 2015-04-01T16:22:21.896+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3db5355f778169d067') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:22.013+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:22.013+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:22.013+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:22.038+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, 
checkEmpty: false } 2015-04-01T16:22:22.038+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:22.038+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:22.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:22.049+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:22:22.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:22.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:22.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:24.049Z 2015-04-01T16:22:22.068+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:22.068+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:22.069+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:22.069+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:22.569+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:22.569+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:22.570+0000 D COMMAND [conn22] run command admin.$cmd { 
buildInfo: 1 } 2015-04-01T16:22:22.570+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:22.581+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:22.581+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:22.581+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:22.582+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d068') } 2015-04-01T16:22:22.582+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d068') } 2015-04-01T16:22:22.582+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d068') } 2015-04-01T16:22:22.582+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 5 secs ago 2015-04-01T16:22:22.582+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d068') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:22.880+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, 
id: 0 }
2015-04-01T16:22:22.880+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:22.880+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:22.881+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d069') }
2015-04-01T16:22:22.881+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d069') }
2015-04-01T16:22:22.881+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d069') }
2015-04-01T16:22:22.881+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 5 secs ago
2015-04-01T16:22:22.881+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3eb5355f778169d069') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:23.069+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:23.069+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:23.070+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:23.070+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:23.222+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:23.222+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:23.222+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:23.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:25.222Z
2015-04-01T16:22:23.579+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.579+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.579+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:23.580+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06a') }
2015-04-01T16:22:23.580+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06a') }
2015-04-01T16:22:23.580+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06a') }
2015-04-01T16:22:23.580+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 6 secs ago
2015-04-01T16:22:23.580+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06a') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:23.582+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:23.582+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:23.582+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:23.583+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:23.693+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.693+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.693+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:23.694+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06b') }
2015-04-01T16:22:23.694+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06b') }
2015-04-01T16:22:23.694+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06b') }
2015-04-01T16:22:23.694+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 6 secs ago
2015-04-01T16:22:23.694+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06b') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:23.889+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.889+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.889+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:23.890+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06c') }
2015-04-01T16:22:23.890+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06c') }
2015-04-01T16:22:23.890+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06c') }
2015-04-01T16:22:23.890+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 6 secs ago
2015-04-01T16:22:23.890+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06c') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:23.983+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.983+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:23.983+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:23.984+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06d') }
2015-04-01T16:22:23.984+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06d') }
2015-04-01T16:22:23.984+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06d') }
2015-04-01T16:22:23.984+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 6 secs ago
2015-04-01T16:22:23.984+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b3fb5355f778169d06d') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:24.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:24.014+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:24.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:24.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:24.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:24.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:24.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:24.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:24.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:24.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:26.049Z
2015-04-01T16:22:24.096+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:24.096+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:24.096+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:24.097+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:24.407+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63027 #23 (4 connections now open)
2015-04-01T16:22:24.415+0000 D QUERY [conn23] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:22:24.415+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:24.416+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:24.416+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:24.416+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:24.417+0000 D COMMAND [conn23] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D64625D326957342124463E2546495A3F476D4550) }
2015-04-01T16:22:24.417+0000 I COMMAND [conn23] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D64625D326957342124463E2546495A3F476D4550) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms
2015-04-01T16:22:24.425+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:24.425+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:24.425+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:24.426+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b40b5355f778169d06e') }
2015-04-01T16:22:24.427+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b40b5355f778169d06e') }
2015-04-01T16:22:24.427+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b40b5355f778169d06e') }
2015-04-01T16:22:24.427+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 7 secs ago
2015-04-01T16:22:24.427+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b40b5355f778169d06e') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:24.490+0000 D COMMAND [conn23] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D64625D326957342124463E2546495A3F476D45506D5133664D627633694E426B625939426D4E5055566E515067316F32316A4B422C703D563852326552...) }
2015-04-01T16:22:24.491+0000 I COMMAND [conn23] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D64625D326957342124463E2546495A3F476D45506D5133664D627633694E426B625939426D4E5055566E515067316F32316A4B422C703D563852326552...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:22:24.491+0000 D COMMAND [conn23] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) }
2015-04-01T16:22:24.491+0000 I ACCESS [conn23] Successfully authenticated as principal bob on admin
2015-04-01T16:22:24.491+0000 I COMMAND [conn23] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:22:24.492+0000 D COMMAND [conn23] run command admin.$cmd { getLastError: 1 }
2015-04-01T16:22:24.492+0000 I COMMAND [conn23] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms
2015-04-01T16:22:24.492+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:24.492+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:24.493+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:24.493+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:24.597+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:24.597+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:24.597+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:24.598+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:25.098+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:25.098+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:25.099+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:25.099+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:25.222+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:25.222+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:22:25.222+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:25.222+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:25.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:27.222Z
2015-04-01T16:22:25.410+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:25.410+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:25.410+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:25.411+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b41b5355f778169d06f') }
2015-04-01T16:22:25.411+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b41b5355f778169d06f') }
2015-04-01T16:22:25.411+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b41b5355f778169d06f') }
2015-04-01T16:22:25.411+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 8 secs ago
2015-04-01T16:22:25.411+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b41b5355f778169d06f') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:25.609+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:25.609+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:25.610+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:25.610+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:26.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:26.014+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:26.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:26.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:26.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:26.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:26.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:26.049+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:26.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:26.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:28.049Z
2015-04-01T16:22:26.110+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:26.110+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:26.111+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:26.111+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:26.204+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:26.204+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:26.204+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:26.205+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d070') }
2015-04-01T16:22:26.205+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d070') }
2015-04-01T16:22:26.205+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d070') }
2015-04-01T16:22:26.205+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 8 secs ago
2015-04-01T16:22:26.205+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d070') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:26.623+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:26.623+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:26.623+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:26.624+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:26.857+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:26.857+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:26.857+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:26.858+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d071') }
2015-04-01T16:22:26.858+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d071') }
2015-04-01T16:22:26.858+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d071') }
2015-04-01T16:22:26.858+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 9 secs ago
2015-04-01T16:22:26.858+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b42b5355f778169d071') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:27.024+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:27.024+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:27.024+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:27.025+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d072') }
2015-04-01T16:22:27.025+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d072') }
2015-04-01T16:22:27.025+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d072') }
2015-04-01T16:22:27.025+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 9 secs ago
2015-04-01T16:22:27.025+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d072') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:27.124+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:27.124+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:27.125+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:27.125+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:27.222+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:27.222+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:27.222+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:27.222+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:29.222Z
2015-04-01T16:22:27.637+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:27.637+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:27.637+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:27.638+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:27.764+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:27.764+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:27.764+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:27.765+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d073') }
2015-04-01T16:22:27.765+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d073') }
2015-04-01T16:22:27.765+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d073') }
2015-04-01T16:22:27.765+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 10 secs ago
2015-04-01T16:22:27.765+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b43b5355f778169d073') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:28.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:28.014+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:28.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:28.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:28.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:28.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:28.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:28.049+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:28.049+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:28.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:28.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:30.049Z
2015-04-01T16:22:28.138+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:28.138+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:28.138+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:28.139+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:28.639+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:28.639+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:28.640+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:28.640+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:28.789+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:28.789+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:28.789+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:28.790+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b44b5355f778169d074') }
2015-04-01T16:22:28.790+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b44b5355f778169d074') }
2015-04-01T16:22:28.790+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b44b5355f778169d074') }
2015-04-01T16:22:28.790+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 11 secs ago
2015-04-01T16:22:28.790+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b44b5355f778169d074') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:29.150+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:29.150+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:29.151+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:29.151+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:29.206+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:29.206+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:29.206+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:29.207+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d075') }
2015-04-01T16:22:29.207+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d075') }
2015-04-01T16:22:29.207+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d075') }
2015-04-01T16:22:29.207+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 11 secs ago
2015-04-01T16:22:29.207+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d075') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:29.223+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:29.223+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:29.223+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:29.223+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:31.223Z
2015-04-01T16:22:29.651+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:29.651+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:29.652+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:29.652+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744
locks:{} 0ms 2015-04-01T16:22:29.815+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:29.815+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:29.815+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:29.816+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d076') } 2015-04-01T16:22:29.816+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d076') } 2015-04-01T16:22:29.816+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d076') } 2015-04-01T16:22:29.816+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 12 secs ago 2015-04-01T16:22:29.816+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b45b5355f778169d076') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:30.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:30.014+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 
2015-04-01T16:22:30.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:30.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:30.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:30.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:30.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:30.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:30.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:30.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:32.049Z
2015-04-01T16:22:30.167+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:30.167+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:30.168+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:30.168+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:30.555+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:30.555+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:30.556+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:30.556+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:30.750+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:30.750+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:30.750+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:30.751+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d077') }
2015-04-01T16:22:30.751+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d077') }
2015-04-01T16:22:30.751+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d077') }
2015-04-01T16:22:30.751+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 13 secs ago
2015-04-01T16:22:30.751+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d077') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:30.932+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:30.932+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:30.932+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:30.933+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d078') }
2015-04-01T16:22:30.933+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d078') }
2015-04-01T16:22:30.933+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d078') }
2015-04-01T16:22:30.933+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 13 secs ago
2015-04-01T16:22:30.933+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b46b5355f778169d078') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:31.056+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:31.056+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:31.057+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:31.057+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:31.223+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:31.223+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:22:31.223+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:31.223+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:31.223+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:33.223Z
2015-04-01T16:22:31.251+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:31.251+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:31.251+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:31.251+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d079') }
2015-04-01T16:22:31.252+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d079') }
2015-04-01T16:22:31.252+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d079') }
2015-04-01T16:22:31.252+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 14 secs ago
2015-04-01T16:22:31.252+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d079') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:31.569+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:31.569+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:31.570+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:31.570+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:31.892+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:31.892+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:31.892+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:31.893+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d07a') }
2015-04-01T16:22:31.893+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d07a') }
2015-04-01T16:22:31.893+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d07a') }
2015-04-01T16:22:31.893+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 14 secs ago
2015-04-01T16:22:31.893+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b47b5355f778169d07a') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:32.014+0000 D COMMAND [conn20] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:32.014+0000 D COMMAND [conn20] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:32.014+0000 I COMMAND [conn20] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:32.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:32.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:32.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:32.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:32.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:32.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:32.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:34.049Z
2015-04-01T16:22:32.083+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:32.083+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:32.084+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:32.084+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:32.584+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:32.584+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:32.585+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:32.585+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:32.851+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:32.851+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:32.851+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:32.852+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07b') }
2015-04-01T16:22:32.852+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07b') }
2015-04-01T16:22:32.852+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07b') }
2015-04-01T16:22:32.852+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 15 secs ago
2015-04-01T16:22:32.852+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07b') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:32.996+0000 D COMMAND [conn20] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:32.996+0000 D COMMAND [conn20] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:32.996+0000 I COMMAND [conn20] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:32.997+0000 D COMMAND [conn20] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07c') }
2015-04-01T16:22:32.997+0000 D COMMAND [conn20] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07c') }
2015-04-01T16:22:32.997+0000 D COMMAND [conn20] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07c') }
2015-04-01T16:22:32.997+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 15 secs ago
2015-04-01T16:22:32.997+0000 I COMMAND [conn20] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b48b5355f778169d07c') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:33.097+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:33.097+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:33.098+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:33.098+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:33.224+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:33.224+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:33.224+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:33.224+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:35.224Z
2015-04-01T16:22:33.612+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:33.612+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:33.613+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:33.613+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:33.797+0000 D NETWORK [conn20] SocketException: remote: 127.0.0.1:63005 error: 9001 socket exception [CLOSED] server [127.0.0.1:63005]
2015-04-01T16:22:33.797+0000 I NETWORK [conn20] end connection 127.0.0.1:63005 (3 connections now open)
2015-04-01T16:22:33.798+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63030 #24 (4 connections now open)
2015-04-01T16:22:33.800+0000 D QUERY [conn24] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:22:33.800+0000 D COMMAND [conn24] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4F59327A724F684D3363463672786C4E396F456E2F5430653967537A41624A64) }
2015-04-01T16:22:33.800+0000 I COMMAND [conn24] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4F59327A724F684D3363463672786C4E396F456E2F5430653967537A41624A64) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:22:33.821+0000 D COMMAND [conn24] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D4F59327A724F684D3363463672786C4E396F456E2F5430653967537A41624A646A6F32677459685364576977426B6843494651526B6D476D53354F5248...), conversationId: 1 }
2015-04-01T16:22:33.821+0000 I COMMAND [conn24] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4F59327A724F684D3363463672786C4E396F456E2F5430653967537A41624A646A6F32677459685364576977426B6843494651526B6D476D53354F5248...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:22:33.821+0000 D COMMAND [conn24] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:22:33.821+0000 I ACCESS [conn24] Successfully authenticated as principal __system on local
2015-04-01T16:22:33.821+0000 I COMMAND [conn24] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:22:33.821+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:33.821+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:33.821+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:33.822+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b49b5355f778169d07d') }
2015-04-01T16:22:33.822+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b49b5355f778169d07d') }
2015-04-01T16:22:33.822+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b49b5355f778169d07d') }
2015-04-01T16:22:33.822+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 16 secs ago
2015-04-01T16:22:33.822+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b49b5355f778169d07d') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:34.014+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:34.014+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:34.014+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:34.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:34.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:34.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:34.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:34.049+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:34.049+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:34.049+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:34.049+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:36.049Z
2015-04-01T16:22:34.127+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:34.127+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:34.128+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:34.128+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:34.282+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:34.282+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:34.282+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:34.283+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07e') }
2015-04-01T16:22:34.283+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07e') }
2015-04-01T16:22:34.283+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07e') }
2015-04-01T16:22:34.283+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 17 secs ago
2015-04-01T16:22:34.283+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07e') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:34.393+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:34.394+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:34.395+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:34.395+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:34.628+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:34.628+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:34.629+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:34.629+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:34.969+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:34.969+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:34.969+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:34.970+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07f') }
2015-04-01T16:22:34.970+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07f') }
2015-04-01T16:22:34.970+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07f') }
2015-04-01T16:22:34.970+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 17 secs ago
2015-04-01T16:22:34.970+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4ab5355f778169d07f') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:35.129+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:35.129+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:35.130+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:35.130+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:35.224+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:35.224+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:35.224+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2);
my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:35.224+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:37.224Z 2015-04-01T16:22:35.630+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:35.630+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:35.631+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:35.631+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:35.659+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:35.659+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:35.659+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:35.660+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d080') } 2015-04-01T16:22:35.660+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d080') } 2015-04-01T16:22:35.660+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d080') } 2015-04-01T16:22:35.660+0000 I REPL [ReplicationExecutor] replSet voting no for 
localhost:27017; voted for localhost:27018 18 secs ago 2015-04-01T16:22:35.660+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d080') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:35.982+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:35.982+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:35.982+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:35.983+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d081') } 2015-04-01T16:22:35.983+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d081') } 2015-04-01T16:22:35.983+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d081') } 2015-04-01T16:22:35.983+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 18 secs ago 2015-04-01T16:22:35.983+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4bb5355f778169d081') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 
2015-04-01T16:22:36.014+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:36.014+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:36.014+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:36.039+0000 D COMMAND [conn21] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:36.039+0000 D COMMAND [conn21] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:36.039+0000 I COMMAND [conn21] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:36.049+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:36.049+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:36.049+0000 D NETWORK [ReplExecNetThread-2] connected to server localhost:27019 (127.0.0.1) 2015-04-01T16:22:36.052+0000 W NETWORK [ReplExecNetThread-2] The server certificate does not match the host name localhost 2015-04-01T16:22:36.087+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:36.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:36.087+0000 D REPL 
[ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:38.087Z 2015-04-01T16:22:36.134+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:36.134+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:36.135+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:36.135+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:36.296+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.296+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.296+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:36.297+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d082') } 2015-04-01T16:22:36.297+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d082') } 2015-04-01T16:22:36.297+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d082') } 2015-04-01T16:22:36.297+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 19 secs ago 2015-04-01T16:22:36.297+0000 I COMMAND [conn24] 
command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d082') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:36.646+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:36.646+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:36.647+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:36.647+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:36.670+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.670+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.670+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:36.671+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d083') } 2015-04-01T16:22:36.671+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d083') } 2015-04-01T16:22:36.671+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d083') } 2015-04-01T16:22:36.671+0000 I 
REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 19 secs ago 2015-04-01T16:22:36.671+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4cb5355f778169d083') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:36.999+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.999+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:36.999+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:37.000+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d084') } 2015-04-01T16:22:37.000+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d084') } 2015-04-01T16:22:37.000+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d084') } 2015-04-01T16:22:37.000+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 19 secs ago 2015-04-01T16:22:37.000+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d084') } ntoreturn:1 keyUpdates:0 writeConflicts:0 
numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:37.126+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:37.126+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:37.126+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:37.127+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d085') } 2015-04-01T16:22:37.127+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d085') } 2015-04-01T16:22:37.127+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d085') } 2015-04-01T16:22:37.127+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 19 secs ago 2015-04-01T16:22:37.127+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d085') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:37.155+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:37.155+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:37.156+0000 D COMMAND [conn22] run command admin.$cmd 
{ buildInfo: 1 } 2015-04-01T16:22:37.156+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:37.224+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:37.224+0000 D NETWORK [ReplExecNetThread-2] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:22:37.224+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:37.224+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:37.224+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:39.224Z 2015-04-01T16:22:37.498+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:37.498+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:37.498+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:37.499+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d086') } 2015-04-01T16:22:37.499+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d086') } 2015-04-01T16:22:37.499+0000 D COMMAND [conn24] replSet received elect msg { 
replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d086') } 2015-04-01T16:22:37.499+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 20 secs ago 2015-04-01T16:22:37.499+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4db5355f778169d086') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:37.668+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:37.668+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:37.669+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:37.669+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:38.014+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:38.014+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:38.014+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:38.039+0000 D NETWORK [conn21] SocketException: remote: 127.0.0.1:63010 error: 9001 socket exception [CLOSED] server [127.0.0.1:63010] 2015-04-01T16:22:38.039+0000 I NETWORK [conn21] end connection 127.0.0.1:63010 (3 connections now open) 2015-04-01T16:22:38.039+0000 I NETWORK [initandlisten] connection accepted from 
127.0.0.1:63033 #25 (4 connections now open) 2015-04-01T16:22:38.042+0000 D QUERY [conn25] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:22:38.042+0000 D COMMAND [conn25] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D616B4764355A4F67676F2B31514D70724F4C5A6B35643672624C3253516C5A59) } 2015-04-01T16:22:38.042+0000 I COMMAND [conn25] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D616B4764355A4F67676F2B31514D70724F4C5A6B35643672624C3253516C5A59) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms 2015-04-01T16:22:38.058+0000 D COMMAND [conn25] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D616B4764355A4F67676F2B31514D70724F4C5A6B35643672624C3253516C5A595A4E526842594168306161417261556F3761514C692B70517937565471...), conversationId: 1 } 2015-04-01T16:22:38.058+0000 I COMMAND [conn25] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D616B4764355A4F67676F2B31514D70724F4C5A6B35643672624C3253516C5A595A4E526842594168306161417261556F3761514C692B70517937565471...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:22:38.058+0000 D COMMAND [conn25] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } 2015-04-01T16:22:38.058+0000 I ACCESS [conn25] Successfully authenticated as principal __system on local 2015-04-01T16:22:38.058+0000 I COMMAND [conn25] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:22:38.058+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: 
"repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:38.058+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:38.058+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:38.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:38.087+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:38.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:38.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:40.087Z 2015-04-01T16:22:38.169+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:38.169+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:38.170+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:38.170+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:38.357+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:38.357+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:38.357+0000 I COMMAND 
[conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:38.358+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d087') } 2015-04-01T16:22:38.358+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d087') } 2015-04-01T16:22:38.358+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d087') } 2015-04-01T16:22:38.358+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 21 secs ago 2015-04-01T16:22:38.358+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d087') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:38.556+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:38.556+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:38.556+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:38.557+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: 
"localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d088') } 2015-04-01T16:22:38.557+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d088') } 2015-04-01T16:22:38.557+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d088') } 2015-04-01T16:22:38.557+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 21 secs ago 2015-04-01T16:22:38.557+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4eb5355f778169d088') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:38.682+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:38.682+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:38.683+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:38.683+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:39.184+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:39.184+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:39.185+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:39.185+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:39.215+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 
1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:39.215+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:39.215+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:39.216+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4fb5355f778169d089') } 2015-04-01T16:22:39.216+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4fb5355f778169d089') } 2015-04-01T16:22:39.216+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4fb5355f778169d089') } 2015-04-01T16:22:39.216+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 22 secs ago 2015-04-01T16:22:39.216+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b4fb5355f778169d089') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:39.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:39.225+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:39.225+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest 
is 551c1b3b:1
2015-04-01T16:22:39.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:41.225Z
2015-04-01T16:22:39.697+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:39.697+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:39.698+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:39.698+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:40.006+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:40.006+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:40.006+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:40.007+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08a') }
2015-04-01T16:22:40.007+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08a') }
2015-04-01T16:22:40.007+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08a') }
2015-04-01T16:22:40.007+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 22 secs ago
2015-04-01T16:22:40.007+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08a') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:40.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:40.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:40.015+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:40.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:40.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:40.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:40.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:40.087+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:40.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:40.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:42.087Z
2015-04-01T16:22:40.198+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:40.198+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:40.199+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:40.199+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:40.555+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:40.555+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:40.555+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:40.556+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:40.888+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:40.888+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:40.888+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:40.889+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08b') }
2015-04-01T16:22:40.889+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08b') }
2015-04-01T16:22:40.889+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08b') }
2015-04-01T16:22:40.889+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 23 secs ago
2015-04-01T16:22:40.889+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b50b5355f778169d08b') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:41.070+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:41.070+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:41.072+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:41.072+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:41.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:41.225+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:41.225+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:41.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:43.225Z
2015-04-01T16:22:41.412+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:41.412+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:41.412+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:41.413+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08c') }
2015-04-01T16:22:41.413+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08c') }
2015-04-01T16:22:41.413+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08c') }
2015-04-01T16:22:41.413+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 24 secs ago
2015-04-01T16:22:41.413+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08c') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:41.572+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:41.572+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:41.573+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:41.573+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:41.961+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:41.961+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:41.961+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:41.962+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08d') }
2015-04-01T16:22:41.962+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08d') }
2015-04-01T16:22:41.962+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08d') }
2015-04-01T16:22:41.962+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 24 secs ago
2015-04-01T16:22:41.962+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b51b5355f778169d08d') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:42.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:42.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:42.015+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:42.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:42.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:42.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:42.073+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:42.073+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:42.073+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:42.074+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:42.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:42.087+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:42.087+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:42.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:42.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:44.087Z
2015-04-01T16:22:42.203+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:42.203+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:42.203+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:42.204+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08e') }
2015-04-01T16:22:42.204+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08e') }
2015-04-01T16:22:42.204+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08e') }
2015-04-01T16:22:42.204+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 24 secs ago
2015-04-01T16:22:42.204+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08e') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:42.583+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:42.583+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:42.584+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:42.584+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:42.868+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:42.868+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:42.868+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:42.869+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08f') }
2015-04-01T16:22:42.869+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08f') }
2015-04-01T16:22:42.869+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08f') }
2015-04-01T16:22:42.869+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 25 secs ago
2015-04-01T16:22:42.869+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b52b5355f778169d08f') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:43.084+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:43.084+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:43.085+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:43.085+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:43.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:43.225+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:22:43.225+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:43.225+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:43.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:45.225Z
2015-04-01T16:22:43.596+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:43.596+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:43.597+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:43.597+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:43.766+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:43.766+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:43.766+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:43.767+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b53b5355f778169d090') }
2015-04-01T16:22:43.767+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b53b5355f778169d090') }
2015-04-01T16:22:43.767+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b53b5355f778169d090') }
2015-04-01T16:22:43.767+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 26 secs ago
2015-04-01T16:22:43.767+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b53b5355f778169d090') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:44.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:44.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:44.015+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:44.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:44.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:44.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:44.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:44.087+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:44.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:44.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:46.087Z
2015-04-01T16:22:44.097+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:44.098+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:44.098+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:44.098+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:44.310+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.310+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.310+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:44.311+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d091') }
2015-04-01T16:22:44.311+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d091') }
2015-04-01T16:22:44.311+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d091') }
2015-04-01T16:22:44.311+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 27 secs ago
2015-04-01T16:22:44.311+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d091') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:44.396+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:44.396+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:44.397+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:44.397+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:44.508+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.508+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.508+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:44.509+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d092') }
2015-04-01T16:22:44.509+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d092') }
2015-04-01T16:22:44.509+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d092') }
2015-04-01T16:22:44.509+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 27 secs ago
2015-04-01T16:22:44.509+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d092') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:44.598+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:44.598+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:44.599+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:44.599+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:44.940+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.940+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:44.940+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:44.941+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d093') }
2015-04-01T16:22:44.941+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d093') }
2015-04-01T16:22:44.941+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d093') }
2015-04-01T16:22:44.941+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 27 secs ago
2015-04-01T16:22:44.941+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b54b5355f778169d093') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:45.099+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:45.099+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:45.100+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:45.100+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:45.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:45.225+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:45.225+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:45.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:47.225Z
2015-04-01T16:22:45.340+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:45.340+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:45.340+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:45.341+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d094') }
2015-04-01T16:22:45.341+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d094') }
2015-04-01T16:22:45.341+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d094') }
2015-04-01T16:22:45.341+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 28 secs ago
2015-04-01T16:22:45.341+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d094') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:45.600+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:45.600+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:45.601+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:45.601+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:45.845+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:45.845+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:45.845+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:45.846+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d095') }
2015-04-01T16:22:45.846+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d095') }
2015-04-01T16:22:45.846+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d095') }
2015-04-01T16:22:45.846+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 28 secs ago
2015-04-01T16:22:45.846+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b55b5355f778169d095') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:46.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:46.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:46.015+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:46.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:46.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:46.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:46.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:46.087+0000 D REPL [ReplExecNetThread-2] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:46.087+0000 D REPL [ReplExecNetThread-2] thread shutting down
2015-04-01T16:22:46.087+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1
2015-04-01T16:22:46.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:48.087Z
2015-04-01T16:22:46.110+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:46.110+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:46.111+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:46.111+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:46.623+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:46.623+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:46.624+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:46.624+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:46.795+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:46.795+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:46.795+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:46.796+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b56b5355f778169d096') }
2015-04-01T16:22:46.796+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b56b5355f778169d096') }
2015-04-01T16:22:46.796+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b56b5355f778169d096') }
2015-04-01T16:22:46.796+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 29 secs ago
2015-04-01T16:22:46.796+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b56b5355f778169d096') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:47.054+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:47.054+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:47.054+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:47.055+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d097') }
2015-04-01T16:22:47.055+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d097') }
2015-04-01T16:22:47.055+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d097') }
2015-04-01T16:22:47.055+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 29 secs ago
2015-04-01T16:22:47.055+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d097') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms
2015-04-01T16:22:47.136+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:47.136+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms
2015-04-01T16:22:47.137+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:22:47.137+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:22:47.155+0000 D COMMAND [conn24] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:47.155+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 }
2015-04-01T16:22:47.155+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms
2015-04-01T16:22:47.156+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d098') }
2015-04-01T16:22:47.156+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who:
"localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d098') } 2015-04-01T16:22:47.156+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d098') } 2015-04-01T16:22:47.156+0000 I REPL [ReplicationExecutor] replSet voting no for localhost:27017; voted for localhost:27018 29 secs ago 2015-04-01T16:22:47.156+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d098') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:47.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:47.225+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:47.225+0000 D REPL [ReplicationExecutor] Not standing for election because member is more than 10 seconds behind the most up-to-date member (mask 0x2); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:47.225+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:49.225Z 2015-04-01T16:22:47.229+0000 D COMMAND [conn25] run command admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27019", cfgver: 1, id: 2 } 2015-04-01T16:22:47.231+0000 D COMMAND [conn25] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27019", cfgver: 1, id: 2 } 2015-04-01T16:22:47.231+0000 I COMMAND [conn25] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27019", cfgver: 1, id: 2 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:170 locks:{} 0ms 2015-04-01T16:22:47.321+0000 D COMMAND [conn24] run command 
admin.$cmd { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:47.321+0000 D COMMAND [conn24] command: { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } 2015-04-01T16:22:47.321+0000 I COMMAND [conn24] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "repl0", opTime: new Date(6132806732788793345), who: "localhost:27017", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:70 locks:{} 0ms 2015-04-01T16:22:47.322+0000 D COMMAND [conn24] run command admin.$cmd { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d099') } 2015-04-01T16:22:47.322+0000 D COMMAND [conn24] command: { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d099') } 2015-04-01T16:22:47.322+0000 D COMMAND [conn24] replSet received elect msg { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d099') } 2015-04-01T16:22:47.322+0000 I REPL [ReplicationExecutor] replSetElect voting yea for localhost:27017 (0) 2015-04-01T16:22:47.322+0000 I COMMAND [conn24] command admin.$cmd command: replSetElect { replSetElect: 1, set: "repl0", who: "localhost:27017", whoid: 0, cfgver: 1, round: ObjectId('551c1b57b5355f778169d099') } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:66 locks:{} 0ms 2015-04-01T16:22:47.638+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:47.638+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:47.638+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:47.639+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } 
keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:48.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:48.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:48.015+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:48.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:48.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:48.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:48.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:48.087+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:22:48.087+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:48.087+0000 D REPL [ReplicationExecutor] Not standing for election because I recently voted for localhost:27017; member is more than 10 seconds behind the most up-to-date member (mask 0x102); my last optime is 551c1b27:42 and the newest is 551c1b3b:1 2015-04-01T16:22:48.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 
2015-04-01T16:22:50.087Z 2015-04-01T16:22:48.151+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:48.151+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:48.151+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:48.152+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:48.652+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:48.652+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:48.653+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:48.653+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:48.653+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:48.654+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:369 locks:{} 0ms 2015-04-01T16:22:48.657+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:48.657+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:48.673+0000 D REPL [rsBackgroundSync] bgsync buffer has 530 bytes 2015-04-01T16:22:48.815+0000 D REPL [rsBackgroundSync] bgsync buffer has 2095 bytes 2015-04-01T16:22:49.225+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:49.225+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:22:49.225+0000 D NETWORK [ReplExecNetThread-0] connected to server 
localhost:27017 (127.0.0.1) 2015-04-01T16:22:49.228+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost 2015-04-01T16:22:49.312+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:49.312+0000 I REPL [ReplicationExecutor] Member localhost:27017 is now in state PRIMARY 2015-04-01T16:22:49.312+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:51.312Z 2015-04-01T16:22:49.349+0000 D REPL [rsBackgroundSync] bgsync buffer has 5032 bytes 2015-04-01T16:22:50.015+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:50.015+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:50.016+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:50.060+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:50.060+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:50.060+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:50.087+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:50.087+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to 
localhost:27019 was OK 2015-04-01T16:22:50.087+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:52.087Z 2015-04-01T16:22:50.573+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:22:50.573+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:22:50.574+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:50.574+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:51.339+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:51.343+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:51.350+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:53.350Z 2015-04-01T16:22:51.661+0000 D REPL [rsBackgroundSync] bgsync buffer has 29366751 bytes 2015-04-01T16:22:52.088+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:52.088+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:52.089+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:52.089+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:52.094+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:52.094+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: 
"localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 5ms 2015-04-01T16:22:52.095+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:52.095+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 6ms 2015-04-01T16:22:52.096+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:54.096Z 2015-04-01T16:22:52.465+0000 D REPL [rsBackgroundSync] bgsync buffer has 50339915 bytes 2015-04-01T16:22:52.465+0000 D REPL [rsBackgroundSync] bgsync buffer has 50341625 bytes 2015-04-01T16:22:52.465+0000 D REPL [rsBackgroundSync] bgsync buffer has 50343335 bytes 2015-04-01T16:22:52.469+0000 D REPL [rsBackgroundSync] bgsync buffer has 50345045 bytes 2015-04-01T16:22:52.469+0000 D REPL [rsBackgroundSync] bgsync buffer has 50346755 bytes 2015-04-01T16:22:52.473+0000 D REPL [rsBackgroundSync] bgsync buffer has 50348465 bytes 2015-04-01T16:22:52.477+0000 D REPL [rsBackgroundSync] bgsync buffer has 50350175 bytes 2015-04-01T16:22:52.477+0000 D REPL [rsBackgroundSync] bgsync buffer has 50351885 bytes 2015-04-01T16:22:52.484+0000 D REPL [rsBackgroundSync] bgsync buffer has 50353595 bytes 2015-04-01T16:22:52.484+0000 D REPL [rsBackgroundSync] bgsync buffer has 50355305 bytes 2015-04-01T16:22:52.489+0000 D REPL [rsBackgroundSync] bgsync buffer has 50357015 bytes 2015-04-01T16:22:52.489+0000 D REPL [rsBackgroundSync] bgsync buffer has 50358725 bytes 2015-04-01T16:22:52.489+0000 D REPL [rsBackgroundSync] bgsync buffer has 50360435 bytes 2015-04-01T16:22:52.496+0000 D REPL [rsBackgroundSync] bgsync buffer has 50362145 bytes 2015-04-01T16:22:52.496+0000 D REPL [rsBackgroundSync] bgsync buffer has 50363855 bytes 2015-04-01T16:22:52.496+0000 
D REPL [rsBackgroundSync] bgsync buffer has 50365565 bytes 2015-04-01T16:22:52.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 50367275 bytes 2015-04-01T16:22:52.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 50368985 bytes 2015-04-01T16:22:52.508+0000 D REPL [rsBackgroundSync] bgsync buffer has 50370695 bytes 2015-04-01T16:22:52.508+0000 D REPL [rsBackgroundSync] bgsync buffer has 50372405 bytes 2015-04-01T16:22:52.509+0000 D REPL [rsBackgroundSync] bgsync buffer has 50374115 bytes 2015-04-01T16:22:52.515+0000 D REPL [rsBackgroundSync] bgsync buffer has 50375825 bytes 2015-04-01T16:22:52.515+0000 D REPL [rsBackgroundSync] bgsync buffer has 50377535 bytes 2015-04-01T16:22:52.515+0000 D REPL [rsBackgroundSync] bgsync buffer has 50379245 bytes 2015-04-01T16:22:52.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 50380955 bytes 2015-04-01T16:22:52.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 50382665 bytes 2015-04-01T16:22:52.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 50384375 bytes 2015-04-01T16:22:52.536+0000 D REPL [rsBackgroundSync] bgsync buffer has 50386085 bytes 2015-04-01T16:22:52.536+0000 D REPL [rsBackgroundSync] bgsync buffer has 50387795 bytes 2015-04-01T16:22:52.536+0000 D REPL [rsBackgroundSync] bgsync buffer has 50389505 bytes 2015-04-01T16:22:52.536+0000 D REPL [rsBackgroundSync] bgsync buffer has 50391215 bytes 2015-04-01T16:22:52.536+0000 D REPL [rsBackgroundSync] bgsync buffer has 50392925 bytes 2015-04-01T16:22:52.540+0000 D REPL [rsBackgroundSync] bgsync buffer has 50394635 bytes 2015-04-01T16:22:52.547+0000 D REPL [rsBackgroundSync] bgsync buffer has 50396345 bytes 2015-04-01T16:22:52.547+0000 D REPL [rsBackgroundSync] bgsync buffer has 50398055 bytes 2015-04-01T16:22:52.554+0000 D REPL [rsBackgroundSync] bgsync buffer has 50399765 bytes 2015-04-01T16:22:52.554+0000 D REPL [rsBackgroundSync] bgsync buffer has 50401475 bytes 2015-04-01T16:22:52.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 50403185 bytes 
2015-04-01T16:22:52.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 50404895 bytes 2015-04-01T16:22:52.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 50406605 bytes 2015-04-01T16:22:52.566+0000 D REPL [rsBackgroundSync] bgsync buffer has 50408315 bytes 2015-04-01T16:22:52.567+0000 D REPL [rsBackgroundSync] bgsync buffer has 50410025 bytes 2015-04-01T16:22:52.570+0000 D REPL [rsBackgroundSync] bgsync buffer has 50411735 bytes 2015-04-01T16:22:52.574+0000 D REPL [rsBackgroundSync] bgsync buffer has 50413445 bytes 2015-04-01T16:22:52.580+0000 D REPL [rsBackgroundSync] bgsync buffer has 50415155 bytes 2015-04-01T16:22:52.580+0000 D REPL [rsBackgroundSync] bgsync buffer has 50416865 bytes 2015-04-01T16:22:52.580+0000 D REPL [rsBackgroundSync] bgsync buffer has 50418575 bytes 2015-04-01T16:22:52.585+0000 D REPL [rsBackgroundSync] bgsync buffer has 50420285 bytes 2015-04-01T16:22:52.585+0000 D REPL [rsBackgroundSync] bgsync buffer has 50421995 bytes 2015-04-01T16:22:52.590+0000 D REPL [rsBackgroundSync] bgsync buffer has 50423705 bytes 2015-04-01T16:22:52.595+0000 D REPL [rsBackgroundSync] bgsync buffer has 50425415 bytes 2015-04-01T16:22:52.595+0000 D REPL [rsBackgroundSync] bgsync buffer has 50427125 bytes 2015-04-01T16:22:52.600+0000 D REPL [rsBackgroundSync] bgsync buffer has 50428835 bytes 2015-04-01T16:22:52.604+0000 D REPL [rsBackgroundSync] bgsync buffer has 50430545 bytes 2015-04-01T16:22:52.604+0000 D REPL [rsBackgroundSync] bgsync buffer has 50432255 bytes 2015-04-01T16:22:52.608+0000 D REPL [rsBackgroundSync] bgsync buffer has 50433965 bytes 2015-04-01T16:22:52.611+0000 D REPL [rsBackgroundSync] bgsync buffer has 50435675 bytes 2015-04-01T16:22:52.615+0000 D REPL [rsBackgroundSync] bgsync buffer has 50437385 bytes 2015-04-01T16:22:52.618+0000 D REPL [rsBackgroundSync] bgsync buffer has 50439095 bytes 2015-04-01T16:22:52.618+0000 D REPL [rsBackgroundSync] bgsync buffer has 50440805 bytes 2015-04-01T16:22:52.622+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 50442515 bytes 2015-04-01T16:22:52.625+0000 D REPL [rsBackgroundSync] bgsync buffer has 50444225 bytes 2015-04-01T16:22:52.629+0000 D REPL [rsBackgroundSync] bgsync buffer has 50445935 bytes 2015-04-01T16:22:52.633+0000 D REPL [rsBackgroundSync] bgsync buffer has 50447645 bytes 2015-04-01T16:22:52.637+0000 D REPL [rsBackgroundSync] bgsync buffer has 50449355 bytes 2015-04-01T16:22:52.637+0000 D REPL [rsBackgroundSync] bgsync buffer has 50451065 bytes 2015-04-01T16:22:52.640+0000 D REPL [rsBackgroundSync] bgsync buffer has 50452775 bytes 2015-04-01T16:22:52.657+0000 D REPL [rsBackgroundSync] bgsync buffer has 50454449 bytes 2015-04-01T16:22:52.662+0000 D REPL [rsBackgroundSync] bgsync buffer has 50456114 bytes 2015-04-01T16:22:52.666+0000 D REPL [rsBackgroundSync] bgsync buffer has 50457779 bytes 2015-04-01T16:22:52.669+0000 D REPL [rsBackgroundSync] bgsync buffer has 50459444 bytes 2015-04-01T16:22:52.673+0000 D REPL [rsBackgroundSync] bgsync buffer has 50461109 bytes 2015-04-01T16:22:52.680+0000 D REPL [rsBackgroundSync] bgsync buffer has 50462774 bytes 2015-04-01T16:22:52.683+0000 D REPL [rsBackgroundSync] bgsync buffer has 50464439 bytes 2015-04-01T16:22:52.686+0000 D REPL [rsBackgroundSync] bgsync buffer has 50466104 bytes 2015-04-01T16:22:52.689+0000 D REPL [rsBackgroundSync] bgsync buffer has 50467769 bytes 2015-04-01T16:22:52.697+0000 D REPL [rsBackgroundSync] bgsync buffer has 50469434 bytes 2015-04-01T16:22:52.700+0000 D REPL [rsBackgroundSync] bgsync buffer has 50471099 bytes 2015-04-01T16:22:52.703+0000 D REPL [rsBackgroundSync] bgsync buffer has 50472764 bytes 2015-04-01T16:22:52.707+0000 D REPL [rsBackgroundSync] bgsync buffer has 50474429 bytes 2015-04-01T16:22:52.710+0000 D REPL [rsBackgroundSync] bgsync buffer has 50476094 bytes 2015-04-01T16:22:52.716+0000 D REPL [rsBackgroundSync] bgsync buffer has 50477759 bytes 2015-04-01T16:22:52.719+0000 D REPL [rsBackgroundSync] bgsync buffer has 50479424 bytes 2015-04-01T16:22:52.725+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 50481089 bytes 2015-04-01T16:22:52.728+0000 D REPL [rsBackgroundSync] bgsync buffer has 50482754 bytes 2015-04-01T16:22:52.732+0000 D REPL [rsBackgroundSync] bgsync buffer has 50484419 bytes 2015-04-01T16:22:52.735+0000 D REPL [rsBackgroundSync] bgsync buffer has 50486084 bytes 2015-04-01T16:22:52.741+0000 D REPL [rsBackgroundSync] bgsync buffer has 50487749 bytes 2015-04-01T16:22:52.744+0000 D REPL [rsBackgroundSync] bgsync buffer has 50489414 bytes 2015-04-01T16:22:52.750+0000 D REPL [rsBackgroundSync] bgsync buffer has 50491079 bytes 2015-04-01T16:22:52.753+0000 D REPL [rsBackgroundSync] bgsync buffer has 50492744 bytes 2015-04-01T16:22:52.759+0000 D REPL [rsBackgroundSync] bgsync buffer has 50494409 bytes 2015-04-01T16:22:52.762+0000 D REPL [rsBackgroundSync] bgsync buffer has 50496074 bytes 2015-04-01T16:22:52.768+0000 D REPL [rsBackgroundSync] bgsync buffer has 50497739 bytes 2015-04-01T16:22:52.771+0000 D REPL [rsBackgroundSync] bgsync buffer has 50499404 bytes 2015-04-01T16:22:52.778+0000 D REPL [rsBackgroundSync] bgsync buffer has 50501069 bytes 2015-04-01T16:22:52.782+0000 D REPL [rsBackgroundSync] bgsync buffer has 50502734 bytes 2015-04-01T16:22:52.789+0000 D REPL [rsBackgroundSync] bgsync buffer has 50504399 bytes 2015-04-01T16:22:52.792+0000 D REPL [rsBackgroundSync] bgsync buffer has 50506064 bytes 2015-04-01T16:22:52.798+0000 D REPL [rsBackgroundSync] bgsync buffer has 50507729 bytes 2015-04-01T16:22:52.801+0000 D REPL [rsBackgroundSync] bgsync buffer has 50509394 bytes 2015-04-01T16:22:52.807+0000 D REPL [rsBackgroundSync] bgsync buffer has 50511059 bytes 2015-04-01T16:22:52.810+0000 D REPL [rsBackgroundSync] bgsync buffer has 50512724 bytes 2015-04-01T16:22:52.816+0000 D REPL [rsBackgroundSync] bgsync buffer has 50514389 bytes 2015-04-01T16:22:52.819+0000 D REPL [rsBackgroundSync] bgsync buffer has 50516054 bytes 2015-04-01T16:22:52.825+0000 D REPL [rsBackgroundSync] bgsync buffer has 50517719 bytes 
2015-04-01T16:22:52.828+0000 D REPL [rsBackgroundSync] bgsync buffer has 50519384 bytes 2015-04-01T16:22:52.834+0000 D REPL [rsBackgroundSync] bgsync buffer has 50521049 bytes 2015-04-01T16:22:52.837+0000 D REPL [rsBackgroundSync] bgsync buffer has 50522714 bytes 2015-04-01T16:22:52.843+0000 D REPL [rsBackgroundSync] bgsync buffer has 50524379 bytes 2015-04-01T16:22:52.846+0000 D REPL [rsBackgroundSync] bgsync buffer has 50526044 bytes 2015-04-01T16:22:52.852+0000 D REPL [rsBackgroundSync] bgsync buffer has 50527709 bytes 2015-04-01T16:22:52.855+0000 D REPL [rsBackgroundSync] bgsync buffer has 50529374 bytes 2015-04-01T16:22:52.861+0000 D REPL [rsBackgroundSync] bgsync buffer has 50531039 bytes 2015-04-01T16:22:52.864+0000 D REPL [rsBackgroundSync] bgsync buffer has 50532704 bytes 2015-04-01T16:22:52.870+0000 D REPL [rsBackgroundSync] bgsync buffer has 50534369 bytes 2015-04-01T16:22:52.873+0000 D REPL [rsBackgroundSync] bgsync buffer has 50536034 bytes 2015-04-01T16:22:52.879+0000 D REPL [rsBackgroundSync] bgsync buffer has 50537699 bytes 2015-04-01T16:22:52.883+0000 D REPL [rsBackgroundSync] bgsync buffer has 50539364 bytes 2015-04-01T16:22:52.889+0000 D REPL [rsBackgroundSync] bgsync buffer has 50541029 bytes 2015-04-01T16:22:52.892+0000 D REPL [rsBackgroundSync] bgsync buffer has 50542694 bytes 2015-04-01T16:22:52.898+0000 D REPL [rsBackgroundSync] bgsync buffer has 50544359 bytes 2015-04-01T16:22:52.901+0000 D REPL [rsBackgroundSync] bgsync buffer has 50546024 bytes 2015-04-01T16:22:52.907+0000 D REPL [rsBackgroundSync] bgsync buffer has 50547689 bytes 2015-04-01T16:22:52.910+0000 D REPL [rsBackgroundSync] bgsync buffer has 50549354 bytes 2015-04-01T16:22:52.916+0000 D REPL [rsBackgroundSync] bgsync buffer has 50551019 bytes 2015-04-01T16:22:52.919+0000 D REPL [rsBackgroundSync] bgsync buffer has 50552684 bytes 2015-04-01T16:22:52.930+0000 D REPL [rsBackgroundSync] bgsync buffer has 50554349 bytes 2015-04-01T16:22:52.930+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 50556014 bytes 2015-04-01T16:22:52.935+0000 D REPL [rsBackgroundSync] bgsync buffer has 50557679 bytes 2015-04-01T16:22:52.938+0000 D REPL [rsBackgroundSync] bgsync buffer has 50559344 bytes 2015-04-01T16:22:52.941+0000 D REPL [rsBackgroundSync] bgsync buffer has 50561009 bytes 2015-04-01T16:22:52.947+0000 D REPL [rsBackgroundSync] bgsync buffer has 50562674 bytes 2015-04-01T16:22:52.961+0000 D REPL [rsBackgroundSync] bgsync buffer has 50564329 bytes 2015-04-01T16:22:52.967+0000 D REPL [rsBackgroundSync] bgsync buffer has 50566039 bytes 2015-04-01T16:22:52.967+0000 D REPL [rsBackgroundSync] bgsync buffer has 50567749 bytes 2015-04-01T16:22:52.967+0000 D REPL [rsBackgroundSync] bgsync buffer has 50569459 bytes 2015-04-01T16:22:52.972+0000 D REPL [rsBackgroundSync] bgsync buffer has 50571169 bytes 2015-04-01T16:22:52.972+0000 D REPL [rsBackgroundSync] bgsync buffer has 50572879 bytes 2015-04-01T16:22:52.977+0000 D REPL [rsBackgroundSync] bgsync buffer has 50574589 bytes 2015-04-01T16:22:52.980+0000 D REPL [rsBackgroundSync] bgsync buffer has 50576299 bytes 2015-04-01T16:22:52.983+0000 D REPL [rsBackgroundSync] bgsync buffer has 50578009 bytes 2015-04-01T16:22:52.986+0000 D REPL [rsBackgroundSync] bgsync buffer has 50579719 bytes 2015-04-01T16:22:52.989+0000 D REPL [rsBackgroundSync] bgsync buffer has 50581429 bytes 2015-04-01T16:22:52.989+0000 D REPL [rsBackgroundSync] bgsync buffer has 50583139 bytes 2015-04-01T16:22:52.992+0000 D REPL [rsBackgroundSync] bgsync buffer has 50584849 bytes 2015-04-01T16:22:52.997+0000 D REPL [rsBackgroundSync] bgsync buffer has 50586559 bytes 2015-04-01T16:22:53.013+0000 D REPL [rsBackgroundSync] bgsync buffer has 50588269 bytes 2015-04-01T16:22:53.016+0000 D REPL [rsBackgroundSync] bgsync buffer has 50589979 bytes 2015-04-01T16:22:53.020+0000 D REPL [rsBackgroundSync] bgsync buffer has 50591689 bytes 2015-04-01T16:22:53.025+0000 D REPL [rsBackgroundSync] bgsync buffer has 50593399 bytes 2015-04-01T16:22:53.025+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 50595109 bytes
[… repeated bgsync buffer entries from 2015-04-01T16:22:53.028 through 16:22:53.346, buffer growing to 50738827 bytes …]
2015-04-01T16:22:53.350+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:53.351+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:53.352+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:55.352Z
2015-04-01T16:22:53.353+0000 D REPL [rsBackgroundSync] bgsync buffer has 50740492 bytes
2015-04-01T16:22:53.356+0000 D REPL [rsBackgroundSync] bgsync buffer has 50742157 bytes
[… repeated bgsync buffer entries through 2015-04-01T16:22:54.094, buffer growing to 51096947 bytes …]
2015-04-01T16:22:54.097+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:54.097+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:54.098+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:54.098+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:54.098+0000 D REPL [rsBackgroundSync] bgsync buffer has 51098657 bytes
2015-04-01T16:22:54.101+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:22:54.102+0000 D REPL [rsBackgroundSync] bgsync buffer has 51100367 bytes
2015-04-01T16:22:54.102+0000 D REPL [rsBackgroundSync] bgsync buffer has 51102077 bytes
2015-04-01T16:22:54.104+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:54.110+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:56.110Z
2015-04-01T16:22:54.111+0000 D REPL [rsBackgroundSync] bgsync buffer has 51103787 bytes
[… repeated bgsync buffer entries through 2015-04-01T16:22:54.142, buffer growing to 51127727 bytes …]
2015-04-01T16:22:54.146+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:54.146+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:54.147+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
[… repeated bgsync buffer entries through 2015-04-01T16:22:54.423, buffer growing to 51302115 bytes …]
2015-04-01T16:22:54.424+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:22:54.426+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 1ms
[… repeated bgsync buffer entries through 2015-04-01T16:22:54.446, buffer growing to 51317505 bytes …]
2015-04-01T16:22:54.456+0000 D REPL [rsBackgroundSync] bgsync buffer has 
51319215 bytes 2015-04-01T16:22:54.456+0000 D REPL [rsBackgroundSync] bgsync buffer has 51320925 bytes 2015-04-01T16:22:54.456+0000 D REPL [rsBackgroundSync] bgsync buffer has 51322635 bytes 2015-04-01T16:22:54.456+0000 D REPL [rsBackgroundSync] bgsync buffer has 51324345 bytes 2015-04-01T16:22:54.467+0000 D REPL [rsBackgroundSync] bgsync buffer has 51326055 bytes 2015-04-01T16:22:54.468+0000 D REPL [rsBackgroundSync] bgsync buffer has 51327765 bytes 2015-04-01T16:22:54.468+0000 D REPL [rsBackgroundSync] bgsync buffer has 51329475 bytes 2015-04-01T16:22:54.468+0000 D REPL [rsBackgroundSync] bgsync buffer has 51331185 bytes 2015-04-01T16:22:54.468+0000 D REPL [rsBackgroundSync] bgsync buffer has 51332895 bytes 2015-04-01T16:22:54.475+0000 D REPL [rsBackgroundSync] bgsync buffer has 51334605 bytes 2015-04-01T16:22:54.475+0000 D REPL [rsBackgroundSync] bgsync buffer has 51336315 bytes 2015-04-01T16:22:54.475+0000 D REPL [rsBackgroundSync] bgsync buffer has 51338025 bytes 2015-04-01T16:22:54.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 51339735 bytes 2015-04-01T16:22:54.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 51341445 bytes 2015-04-01T16:22:54.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 51343155 bytes 2015-04-01T16:22:54.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 51344865 bytes 2015-04-01T16:22:54.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 51346575 bytes 2015-04-01T16:22:54.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 51348285 bytes 2015-04-01T16:22:54.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 51349995 bytes 2015-04-01T16:22:54.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 51351705 bytes 2015-04-01T16:22:54.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51353415 bytes 2015-04-01T16:22:54.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51355125 bytes 2015-04-01T16:22:54.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51356835 bytes 2015-04-01T16:22:54.519+0000 D COMMAND [conn23] 
run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:22:54.519+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:22:54.529+0000 D REPL [rsBackgroundSync] bgsync buffer has 51358529 bytes 2015-04-01T16:22:54.529+0000 D REPL [rsBackgroundSync] bgsync buffer has 51360239 bytes 2015-04-01T16:22:54.532+0000 D REPL [rsBackgroundSync] bgsync buffer has 51361949 bytes 2015-04-01T16:22:54.535+0000 D REPL [rsBackgroundSync] bgsync buffer has 51363659 bytes 2015-04-01T16:22:54.538+0000 D REPL [rsBackgroundSync] bgsync buffer has 51365369 bytes 2015-04-01T16:22:54.542+0000 D REPL [rsBackgroundSync] bgsync buffer has 51367079 bytes 2015-04-01T16:22:54.542+0000 D REPL [rsBackgroundSync] bgsync buffer has 51368789 bytes 2015-04-01T16:22:54.546+0000 D REPL [rsBackgroundSync] bgsync buffer has 51370499 bytes 2015-04-01T16:22:54.550+0000 D REPL [rsBackgroundSync] bgsync buffer has 51372209 bytes 2015-04-01T16:22:54.550+0000 D REPL [rsBackgroundSync] bgsync buffer has 51373919 bytes 2015-04-01T16:22:54.553+0000 D REPL [rsBackgroundSync] bgsync buffer has 51375629 bytes 2015-04-01T16:22:54.557+0000 D REPL [rsBackgroundSync] bgsync buffer has 51377339 bytes 2015-04-01T16:22:54.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51379049 bytes 2015-04-01T16:22:54.565+0000 D REPL [rsBackgroundSync] bgsync buffer has 51380759 bytes 2015-04-01T16:22:54.565+0000 D REPL [rsBackgroundSync] bgsync buffer has 51382469 bytes 2015-04-01T16:22:54.570+0000 D REPL [rsBackgroundSync] bgsync buffer has 51384179 bytes 2015-04-01T16:22:54.570+0000 D REPL [rsBackgroundSync] bgsync buffer has 51385889 bytes 2015-04-01T16:22:54.577+0000 D REPL [rsBackgroundSync] bgsync buffer has 51387599 bytes 2015-04-01T16:22:54.577+0000 D REPL [rsBackgroundSync] bgsync buffer has 51389309 bytes 2015-04-01T16:22:54.581+0000 D REPL [rsBackgroundSync] bgsync buffer has 51391019 bytes 2015-04-01T16:22:54.584+0000 D 
REPL [rsBackgroundSync] bgsync buffer has 51392729 bytes 2015-04-01T16:22:54.587+0000 D REPL [rsBackgroundSync] bgsync buffer has 51394439 bytes 2015-04-01T16:22:54.592+0000 D REPL [rsBackgroundSync] bgsync buffer has 51396149 bytes 2015-04-01T16:22:54.592+0000 D REPL [rsBackgroundSync] bgsync buffer has 51397859 bytes 2015-04-01T16:22:54.596+0000 D REPL [rsBackgroundSync] bgsync buffer has 51399569 bytes 2015-04-01T16:22:54.600+0000 D REPL [rsBackgroundSync] bgsync buffer has 51401279 bytes 2015-04-01T16:22:54.600+0000 D REPL [rsBackgroundSync] bgsync buffer has 51402989 bytes 2015-04-01T16:22:54.607+0000 D REPL [rsBackgroundSync] bgsync buffer has 51404699 bytes 2015-04-01T16:22:54.607+0000 D REPL [rsBackgroundSync] bgsync buffer has 51406409 bytes 2015-04-01T16:22:54.613+0000 D REPL [rsBackgroundSync] bgsync buffer has 51408119 bytes 2015-04-01T16:22:54.617+0000 D REPL [rsBackgroundSync] bgsync buffer has 51409829 bytes 2015-04-01T16:22:54.621+0000 D REPL [rsBackgroundSync] bgsync buffer has 51411539 bytes 2015-04-01T16:22:54.621+0000 D REPL [rsBackgroundSync] bgsync buffer has 51413249 bytes 2015-04-01T16:22:54.625+0000 D REPL [rsBackgroundSync] bgsync buffer has 51414959 bytes 2015-04-01T16:22:54.630+0000 D REPL [rsBackgroundSync] bgsync buffer has 51416669 bytes 2015-04-01T16:22:54.630+0000 D REPL [rsBackgroundSync] bgsync buffer has 51418379 bytes 2015-04-01T16:22:54.634+0000 D REPL [rsBackgroundSync] bgsync buffer has 51420089 bytes 2015-04-01T16:22:54.634+0000 D REPL [rsBackgroundSync] bgsync buffer has 51421799 bytes 2015-04-01T16:22:54.637+0000 D REPL [rsBackgroundSync] bgsync buffer has 51423509 bytes 2015-04-01T16:22:54.641+0000 D REPL [rsBackgroundSync] bgsync buffer has 51425219 bytes 2015-04-01T16:22:54.647+0000 D REPL [rsBackgroundSync] bgsync buffer has 51426929 bytes 2015-04-01T16:22:54.647+0000 D REPL [rsBackgroundSync] bgsync buffer has 51428639 bytes 2015-04-01T16:22:54.647+0000 D REPL [rsBackgroundSync] bgsync buffer has 51430349 bytes 
2015-04-01T16:22:54.650+0000 D REPL [rsBackgroundSync] bgsync buffer has 51432059 bytes 2015-04-01T16:22:54.654+0000 D REPL [rsBackgroundSync] bgsync buffer has 51433769 bytes 2015-04-01T16:22:54.658+0000 D REPL [rsBackgroundSync] bgsync buffer has 51435479 bytes 2015-04-01T16:22:54.658+0000 D REPL [rsBackgroundSync] bgsync buffer has 51437189 bytes 2015-04-01T16:22:54.663+0000 D REPL [rsBackgroundSync] bgsync buffer has 51438899 bytes 2015-04-01T16:22:54.669+0000 D REPL [rsBackgroundSync] bgsync buffer has 51440609 bytes 2015-04-01T16:22:54.669+0000 D REPL [rsBackgroundSync] bgsync buffer has 51442319 bytes 2015-04-01T16:22:54.669+0000 D REPL [rsBackgroundSync] bgsync buffer has 51444029 bytes 2015-04-01T16:22:54.672+0000 D REPL [rsBackgroundSync] bgsync buffer has 51445739 bytes 2015-04-01T16:22:54.676+0000 D REPL [rsBackgroundSync] bgsync buffer has 51447449 bytes 2015-04-01T16:22:54.682+0000 D REPL [rsBackgroundSync] bgsync buffer has 51449159 bytes 2015-04-01T16:22:54.682+0000 D REPL [rsBackgroundSync] bgsync buffer has 51450869 bytes 2015-04-01T16:22:54.688+0000 D REPL [rsBackgroundSync] bgsync buffer has 51452579 bytes 2015-04-01T16:22:54.688+0000 D REPL [rsBackgroundSync] bgsync buffer has 51454289 bytes 2015-04-01T16:22:54.694+0000 D REPL [rsBackgroundSync] bgsync buffer has 51455999 bytes 2015-04-01T16:22:54.694+0000 D REPL [rsBackgroundSync] bgsync buffer has 51457709 bytes 2015-04-01T16:22:54.699+0000 D REPL [rsBackgroundSync] bgsync buffer has 51459419 bytes 2015-04-01T16:22:54.699+0000 D REPL [rsBackgroundSync] bgsync buffer has 51461129 bytes 2015-04-01T16:22:54.699+0000 D REPL [rsBackgroundSync] bgsync buffer has 51462839 bytes 2015-04-01T16:22:54.704+0000 D REPL [rsBackgroundSync] bgsync buffer has 51464549 bytes 2015-04-01T16:22:54.709+0000 D REPL [rsBackgroundSync] bgsync buffer has 51466259 bytes 2015-04-01T16:22:54.709+0000 D REPL [rsBackgroundSync] bgsync buffer has 51467969 bytes 2015-04-01T16:22:54.714+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 51469679 bytes 2015-04-01T16:22:54.733+0000 D REPL [rsBackgroundSync] bgsync buffer has 51471389 bytes 2015-04-01T16:22:54.741+0000 D REPL [rsBackgroundSync] bgsync buffer has 51473399 bytes 2015-04-01T16:22:54.745+0000 D REPL [rsBackgroundSync] bgsync buffer has 51475409 bytes 2015-04-01T16:22:54.751+0000 D REPL [rsBackgroundSync] bgsync buffer has 51477419 bytes 2015-04-01T16:22:54.757+0000 D REPL [rsBackgroundSync] bgsync buffer has 51479429 bytes 2015-04-01T16:22:54.763+0000 D REPL [rsBackgroundSync] bgsync buffer has 51481439 bytes 2015-04-01T16:22:54.770+0000 D REPL [rsBackgroundSync] bgsync buffer has 51483449 bytes 2015-04-01T16:22:54.776+0000 D REPL [rsBackgroundSync] bgsync buffer has 51485459 bytes 2015-04-01T16:22:54.782+0000 D REPL [rsBackgroundSync] bgsync buffer has 51487469 bytes 2015-04-01T16:22:54.791+0000 D REPL [rsBackgroundSync] bgsync buffer has 51489479 bytes 2015-04-01T16:22:54.800+0000 D REPL [rsBackgroundSync] bgsync buffer has 51491489 bytes 2015-04-01T16:22:54.812+0000 D REPL [rsBackgroundSync] bgsync buffer has 51493499 bytes 2015-04-01T16:22:54.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 51495509 bytes 2015-04-01T16:22:54.824+0000 D REPL [rsBackgroundSync] bgsync buffer has 51497519 bytes 2015-04-01T16:22:54.833+0000 D REPL [rsBackgroundSync] bgsync buffer has 51499529 bytes 2015-04-01T16:22:54.839+0000 D REPL [rsBackgroundSync] bgsync buffer has 51501539 bytes 2015-04-01T16:22:54.846+0000 D REPL [rsBackgroundSync] bgsync buffer has 51503549 bytes 2015-04-01T16:22:54.855+0000 D REPL [rsBackgroundSync] bgsync buffer has 51505559 bytes 2015-04-01T16:22:54.864+0000 D REPL [rsBackgroundSync] bgsync buffer has 51507569 bytes 2015-04-01T16:22:54.870+0000 D REPL [rsBackgroundSync] bgsync buffer has 51509579 bytes 2015-04-01T16:22:54.879+0000 D REPL [rsBackgroundSync] bgsync buffer has 51511589 bytes 2015-04-01T16:22:54.888+0000 D REPL [rsBackgroundSync] bgsync buffer has 51513599 bytes 2015-04-01T16:22:54.894+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 51515609 bytes 2015-04-01T16:22:54.903+0000 D REPL [rsBackgroundSync] bgsync buffer has 51517619 bytes 2015-04-01T16:22:54.912+0000 D REPL [rsBackgroundSync] bgsync buffer has 51519629 bytes 2015-04-01T16:22:54.921+0000 D REPL [rsBackgroundSync] bgsync buffer has 51521639 bytes 2015-04-01T16:22:54.930+0000 D REPL [rsBackgroundSync] bgsync buffer has 51523649 bytes 2015-04-01T16:22:54.942+0000 D REPL [rsBackgroundSync] bgsync buffer has 51525659 bytes 2015-04-01T16:22:54.951+0000 D REPL [rsBackgroundSync] bgsync buffer has 51527669 bytes 2015-04-01T16:22:54.960+0000 D REPL [rsBackgroundSync] bgsync buffer has 51529679 bytes 2015-04-01T16:22:54.969+0000 D REPL [rsBackgroundSync] bgsync buffer has 51531689 bytes 2015-04-01T16:22:54.978+0000 D REPL [rsBackgroundSync] bgsync buffer has 51533699 bytes 2015-04-01T16:22:54.990+0000 D REPL [rsBackgroundSync] bgsync buffer has 51535709 bytes 2015-04-01T16:22:54.999+0000 D REPL [rsBackgroundSync] bgsync buffer has 51537719 bytes 2015-04-01T16:22:55.008+0000 D REPL [rsBackgroundSync] bgsync buffer has 51539729 bytes 2015-04-01T16:22:55.020+0000 D REPL [rsBackgroundSync] bgsync buffer has 51541739 bytes 2015-04-01T16:22:55.029+0000 D REPL [rsBackgroundSync] bgsync buffer has 51543749 bytes 2015-04-01T16:22:55.041+0000 D REPL [rsBackgroundSync] bgsync buffer has 51545759 bytes 2015-04-01T16:22:55.053+0000 D REPL [rsBackgroundSync] bgsync buffer has 51547769 bytes 2015-04-01T16:22:55.065+0000 D REPL [rsBackgroundSync] bgsync buffer has 51549779 bytes 2015-04-01T16:22:55.074+0000 D REPL [rsBackgroundSync] bgsync buffer has 51551789 bytes 2015-04-01T16:22:55.086+0000 D REPL [rsBackgroundSync] bgsync buffer has 51553799 bytes 2015-04-01T16:22:55.098+0000 D REPL [rsBackgroundSync] bgsync buffer has 51555809 bytes 2015-04-01T16:22:55.110+0000 D REPL [rsBackgroundSync] bgsync buffer has 51557819 bytes 2015-04-01T16:22:55.122+0000 D REPL [rsBackgroundSync] bgsync buffer has 51559829 bytes 
2015-04-01T16:22:55.134+0000 D REPL [rsBackgroundSync] bgsync buffer has 51561839 bytes 2015-04-01T16:22:55.146+0000 D REPL [rsBackgroundSync] bgsync buffer has 51563849 bytes 2015-04-01T16:22:55.158+0000 D REPL [rsBackgroundSync] bgsync buffer has 51565859 bytes 2015-04-01T16:22:55.170+0000 D REPL [rsBackgroundSync] bgsync buffer has 51567869 bytes 2015-04-01T16:22:55.185+0000 D REPL [rsBackgroundSync] bgsync buffer has 51569879 bytes 2015-04-01T16:22:55.197+0000 D REPL [rsBackgroundSync] bgsync buffer has 51571889 bytes 2015-04-01T16:22:55.209+0000 D REPL [rsBackgroundSync] bgsync buffer has 51573899 bytes 2015-04-01T16:22:55.224+0000 D REPL [rsBackgroundSync] bgsync buffer has 51575909 bytes 2015-04-01T16:22:55.238+0000 D REPL [rsBackgroundSync] bgsync buffer has 51577919 bytes 2015-04-01T16:22:55.253+0000 D REPL [rsBackgroundSync] bgsync buffer has 51579929 bytes 2015-04-01T16:22:55.265+0000 D REPL [rsBackgroundSync] bgsync buffer has 51581939 bytes 2015-04-01T16:22:55.277+0000 D REPL [rsBackgroundSync] bgsync buffer has 51583949 bytes 2015-04-01T16:22:55.292+0000 D REPL [rsBackgroundSync] bgsync buffer has 51585959 bytes 2015-04-01T16:22:55.304+0000 D REPL [rsBackgroundSync] bgsync buffer has 51587969 bytes 2015-04-01T16:22:55.319+0000 D REPL [rsBackgroundSync] bgsync buffer has 51589979 bytes 2015-04-01T16:22:55.334+0000 D REPL [rsBackgroundSync] bgsync buffer has 51591989 bytes 2015-04-01T16:22:55.349+0000 D REPL [rsBackgroundSync] bgsync buffer has 51593999 bytes 2015-04-01T16:22:55.352+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:22:55.352+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:22:55.353+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:22:55.354+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:57.354Z 
2015-04-01T16:22:55.364+0000 D REPL [rsBackgroundSync] bgsync buffer has 51596009 bytes 2015-04-01T16:22:55.379+0000 D REPL [rsBackgroundSync] bgsync buffer has 51598019 bytes 2015-04-01T16:22:55.394+0000 D REPL [rsBackgroundSync] bgsync buffer has 51600029 bytes 2015-04-01T16:22:55.406+0000 D REPL [rsBackgroundSync] bgsync buffer has 51602039 bytes 2015-04-01T16:22:55.424+0000 D REPL [rsBackgroundSync] bgsync buffer has 51604049 bytes 2015-04-01T16:22:55.450+0000 D REPL [rsBackgroundSync] bgsync buffer has 51605943 bytes 2015-04-01T16:22:55.450+0000 D REPL [rsBackgroundSync] bgsync buffer has 51607653 bytes 2015-04-01T16:22:55.453+0000 D REPL [rsBackgroundSync] bgsync buffer has 51609363 bytes 2015-04-01T16:22:55.460+0000 D REPL [rsBackgroundSync] bgsync buffer has 51611073 bytes 2015-04-01T16:22:55.460+0000 D REPL [rsBackgroundSync] bgsync buffer has 51612783 bytes 2015-04-01T16:22:55.460+0000 D REPL [rsBackgroundSync] bgsync buffer has 51614493 bytes 2015-04-01T16:22:55.464+0000 D REPL [rsBackgroundSync] bgsync buffer has 51616203 bytes 2015-04-01T16:22:55.467+0000 D REPL [rsBackgroundSync] bgsync buffer has 51617913 bytes 2015-04-01T16:22:55.467+0000 D REPL [rsBackgroundSync] bgsync buffer has 51619623 bytes 2015-04-01T16:22:55.472+0000 D REPL [rsBackgroundSync] bgsync buffer has 51621333 bytes 2015-04-01T16:22:55.473+0000 D REPL [rsBackgroundSync] bgsync buffer has 51623043 bytes 2015-04-01T16:22:55.477+0000 D REPL [rsBackgroundSync] bgsync buffer has 51624753 bytes 2015-04-01T16:22:55.477+0000 D REPL [rsBackgroundSync] bgsync buffer has 51626463 bytes 2015-04-01T16:22:55.483+0000 D REPL [rsBackgroundSync] bgsync buffer has 51628173 bytes 2015-04-01T16:22:55.483+0000 D REPL [rsBackgroundSync] bgsync buffer has 51629883 bytes 2015-04-01T16:22:55.489+0000 D REPL [rsBackgroundSync] bgsync buffer has 51631593 bytes 2015-04-01T16:22:55.489+0000 D REPL [rsBackgroundSync] bgsync buffer has 51633303 bytes 2015-04-01T16:22:55.489+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 51635013 bytes 2015-04-01T16:22:55.495+0000 D REPL [rsBackgroundSync] bgsync buffer has 51636723 bytes 2015-04-01T16:22:55.495+0000 D REPL [rsBackgroundSync] bgsync buffer has 51638433 bytes 2015-04-01T16:22:55.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51640143 bytes 2015-04-01T16:22:55.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51641853 bytes 2015-04-01T16:22:55.502+0000 D REPL [rsBackgroundSync] bgsync buffer has 51643563 bytes 2015-04-01T16:22:55.509+0000 D REPL [rsBackgroundSync] bgsync buffer has 51645273 bytes 2015-04-01T16:22:55.509+0000 D REPL [rsBackgroundSync] bgsync buffer has 51646983 bytes 2015-04-01T16:22:55.510+0000 D REPL [rsBackgroundSync] bgsync buffer has 51648693 bytes 2015-04-01T16:22:55.515+0000 D REPL [rsBackgroundSync] bgsync buffer has 51650403 bytes 2015-04-01T16:22:55.516+0000 D REPL [rsBackgroundSync] bgsync buffer has 51652113 bytes 2015-04-01T16:22:55.516+0000 D REPL [rsBackgroundSync] bgsync buffer has 51653823 bytes 2015-04-01T16:22:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 51655533 bytes 2015-04-01T16:22:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 51657243 bytes 2015-04-01T16:22:55.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 51658953 bytes 2015-04-01T16:22:55.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51660663 bytes 2015-04-01T16:22:55.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51662373 bytes 2015-04-01T16:22:55.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51664083 bytes 2015-04-01T16:22:55.545+0000 D REPL [rsBackgroundSync] bgsync buffer has 51665793 bytes 2015-04-01T16:22:55.545+0000 D REPL [rsBackgroundSync] bgsync buffer has 51667503 bytes 2015-04-01T16:22:55.545+0000 D REPL [rsBackgroundSync] bgsync buffer has 51669213 bytes 2015-04-01T16:22:55.545+0000 D REPL [rsBackgroundSync] bgsync buffer has 51670923 bytes 2015-04-01T16:22:55.545+0000 D REPL [rsBackgroundSync] bgsync buffer has 51672633 bytes 2015-04-01T16:22:55.545+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 51674343 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51676053 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51677763 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51679473 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51681183 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51682893 bytes 2015-04-01T16:22:55.560+0000 D REPL [rsBackgroundSync] bgsync buffer has 51684603 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51686313 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51688023 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51689733 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51691443 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51693153 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51694863 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51696573 bytes 2015-04-01T16:22:55.578+0000 D REPL [rsBackgroundSync] bgsync buffer has 51698283 bytes 2015-04-01T16:22:55.591+0000 D REPL [rsBackgroundSync] bgsync buffer has 51699993 bytes 2015-04-01T16:22:55.591+0000 D REPL [rsBackgroundSync] bgsync buffer has 51701703 bytes 2015-04-01T16:22:55.591+0000 D REPL [rsBackgroundSync] bgsync buffer has 51703413 bytes 2015-04-01T16:22:55.591+0000 D REPL [rsBackgroundSync] bgsync buffer has 51705123 bytes 2015-04-01T16:22:55.591+0000 D REPL [rsBackgroundSync] bgsync buffer has 51706833 bytes 2015-04-01T16:22:55.601+0000 D REPL [rsBackgroundSync] bgsync buffer has 51708543 bytes 2015-04-01T16:22:55.601+0000 D REPL [rsBackgroundSync] bgsync buffer has 51710253 bytes 2015-04-01T16:22:55.601+0000 D REPL [rsBackgroundSync] bgsync buffer has 51711963 bytes 
2015-04-01T16:22:55.601+0000 D REPL [rsBackgroundSync] bgsync buffer has 51713673 bytes 2015-04-01T16:22:55.608+0000 D REPL [rsBackgroundSync] bgsync buffer has 51715383 bytes 2015-04-01T16:22:55.608+0000 D REPL [rsBackgroundSync] bgsync buffer has 51717093 bytes 2015-04-01T16:22:55.608+0000 D REPL [rsBackgroundSync] bgsync buffer has 51718803 bytes 2015-04-01T16:22:55.632+0000 D REPL [rsBackgroundSync] bgsync buffer has 51720653 bytes 2015-04-01T16:22:55.636+0000 D REPL [rsBackgroundSync] bgsync buffer has 51722663 bytes 2015-04-01T16:22:55.642+0000 D REPL [rsBackgroundSync] bgsync buffer has 51724673 bytes 2015-04-01T16:22:55.645+0000 D REPL [rsBackgroundSync] bgsync buffer has 51726683 bytes 2015-04-01T16:22:55.653+0000 D REPL [rsBackgroundSync] bgsync buffer has 51728693 bytes 2015-04-01T16:22:55.658+0000 D REPL [rsBackgroundSync] bgsync buffer has 51730703 bytes 2015-04-01T16:22:55.663+0000 D REPL [rsBackgroundSync] bgsync buffer has 51732713 bytes 2015-04-01T16:22:55.670+0000 D REPL [rsBackgroundSync] bgsync buffer has 51734723 bytes 2015-04-01T16:22:55.679+0000 D REPL [rsBackgroundSync] bgsync buffer has 51736733 bytes 2015-04-01T16:22:55.685+0000 D REPL [rsBackgroundSync] bgsync buffer has 51738743 bytes 2015-04-01T16:22:55.693+0000 D REPL [rsBackgroundSync] bgsync buffer has 51740753 bytes 2015-04-01T16:22:55.699+0000 D REPL [rsBackgroundSync] bgsync buffer has 51742763 bytes 2015-04-01T16:22:55.705+0000 D REPL [rsBackgroundSync] bgsync buffer has 51744773 bytes 2015-04-01T16:22:55.714+0000 D REPL [rsBackgroundSync] bgsync buffer has 51746783 bytes 2015-04-01T16:22:55.720+0000 D REPL [rsBackgroundSync] bgsync buffer has 51748793 bytes 2015-04-01T16:22:55.726+0000 D REPL [rsBackgroundSync] bgsync buffer has 51750803 bytes 2015-04-01T16:22:55.735+0000 D REPL [rsBackgroundSync] bgsync buffer has 51752813 bytes 2015-04-01T16:22:55.741+0000 D REPL [rsBackgroundSync] bgsync buffer has 51754823 bytes 2015-04-01T16:22:55.751+0000 D REPL [rsBackgroundSync] bgsync 
buffer has 51756833 bytes 2015-04-01T16:22:55.757+0000 D REPL [rsBackgroundSync] bgsync buffer has 51758843 bytes 2015-04-01T16:22:55.766+0000 D REPL [rsBackgroundSync] bgsync buffer has 51760853 bytes 2015-04-01T16:22:55.776+0000 D REPL [rsBackgroundSync] bgsync buffer has 51762863 bytes 2015-04-01T16:22:55.788+0000 D REPL [rsBackgroundSync] bgsync buffer has 51764873 bytes 2015-04-01T16:22:55.797+0000 D REPL [rsBackgroundSync] bgsync buffer has 51766883 bytes 2015-04-01T16:22:55.809+0000 D REPL [rsBackgroundSync] bgsync buffer has 51768893 bytes 2015-04-01T16:22:55.818+0000 D REPL [rsBackgroundSync] bgsync buffer has 51770903 bytes 2015-04-01T16:22:55.830+0000 D REPL [rsBackgroundSync] bgsync buffer has 51772913 bytes 2015-04-01T16:22:55.845+0000 D REPL [rsBackgroundSync] bgsync buffer has 51774923 bytes 2015-04-01T16:22:55.860+0000 D REPL [rsBackgroundSync] bgsync buffer has 51776933 bytes 2015-04-01T16:22:55.874+0000 D REPL [rsBackgroundSync] bgsync buffer has 51778943 bytes 2015-04-01T16:22:55.893+0000 D REPL [rsBackgroundSync] bgsync buffer has 51780953 bytes 2015-04-01T16:22:55.908+0000 D REPL [rsBackgroundSync] bgsync buffer has 51782963 bytes 2015-04-01T16:22:55.923+0000 D REPL [rsBackgroundSync] bgsync buffer has 51784973 bytes 2015-04-01T16:22:55.936+0000 D REPL [rsBackgroundSync] bgsync buffer has 51786983 bytes 2015-04-01T16:22:55.945+0000 D REPL [rsBackgroundSync] bgsync buffer has 51788993 bytes 2015-04-01T16:22:55.957+0000 D REPL [rsBackgroundSync] bgsync buffer has 51791003 bytes 2015-04-01T16:22:55.966+0000 D REPL [rsBackgroundSync] bgsync buffer has 51793013 bytes 2015-04-01T16:22:55.978+0000 D REPL [rsBackgroundSync] bgsync buffer has 51795023 bytes 2015-04-01T16:22:55.993+0000 D REPL [rsBackgroundSync] bgsync buffer has 51797033 bytes 2015-04-01T16:22:56.008+0000 D REPL [rsBackgroundSync] bgsync buffer has 51799043 bytes 2015-04-01T16:22:56.023+0000 D REPL [rsBackgroundSync] bgsync buffer has 51801053 bytes 2015-04-01T16:22:56.041+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 51803063 bytes 2015-04-01T16:22:56.059+0000 D REPL [rsBackgroundSync] bgsync buffer has 51805073 bytes 2015-04-01T16:22:56.074+0000 D REPL [rsBackgroundSync] bgsync buffer has 51807083 bytes 2015-04-01T16:22:56.092+0000 D REPL [rsBackgroundSync] bgsync buffer has 51809093 bytes 2015-04-01T16:22:56.105+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:56.105+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:22:56.106+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:56.110+0000 D REPL [rsBackgroundSync] bgsync buffer has 51811103 bytes 2015-04-01T16:22:56.110+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:22:56.111+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:22:56.111+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:22:58.111Z 2015-04-01T16:22:56.129+0000 D REPL [rsBackgroundSync] bgsync buffer has 51813113 bytes 2015-04-01T16:22:56.148+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:56.149+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:22:56.149+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 
numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:22:56.150+0000 D REPL [rsBackgroundSync] bgsync buffer has 51815123 bytes 2015-04-01T16:22:56.168+0000 D REPL [rsBackgroundSync] bgsync buffer has 51817133 bytes 2015-04-01T16:22:56.186+0000 D REPL [rsBackgroundSync] bgsync buffer has 51819143 bytes 2015-04-01T16:22:56.207+0000 D REPL [rsBackgroundSync] bgsync buffer has 51821153 bytes 2015-04-01T16:22:56.225+0000 D REPL [rsBackgroundSync] bgsync buffer has 51823163 bytes 2015-04-01T16:22:56.246+0000 D REPL [rsBackgroundSync] bgsync buffer has 51825173 bytes 2015-04-01T16:22:56.264+0000 D REPL [rsBackgroundSync] bgsync buffer has 51827183 bytes 2015-04-01T16:22:56.285+0000 D REPL [rsBackgroundSync] bgsync buffer has 51829193 bytes 2015-04-01T16:22:56.303+0000 D REPL [rsBackgroundSync] bgsync buffer has 51831203 bytes 2015-04-01T16:22:56.315+0000 D REPL [rsBackgroundSync] bgsync buffer has 51833213 bytes 2015-04-01T16:22:56.330+0000 D REPL [rsBackgroundSync] bgsync buffer has 51835223 bytes 2015-04-01T16:22:56.342+0000 D REPL [rsBackgroundSync] bgsync buffer has 51837233 bytes 2015-04-01T16:22:56.357+0000 D REPL [rsBackgroundSync] bgsync buffer has 51839243 bytes 2015-04-01T16:22:56.369+0000 D REPL [rsBackgroundSync] bgsync buffer has 51841253 bytes 2015-04-01T16:22:56.384+0000 D REPL [rsBackgroundSync] bgsync buffer has 51843263 bytes 2015-04-01T16:22:56.399+0000 D REPL [rsBackgroundSync] bgsync buffer has 51845273 bytes 2015-04-01T16:22:56.414+0000 D REPL [rsBackgroundSync] bgsync buffer has 51847283 bytes 2015-04-01T16:22:56.426+0000 D REPL [rsBackgroundSync] bgsync buffer has 51849293 bytes 2015-04-01T16:22:56.441+0000 D REPL [rsBackgroundSync] bgsync buffer has 51851303 bytes 2015-04-01T16:22:56.456+0000 D REPL [rsBackgroundSync] bgsync buffer has 51853313 bytes 2015-04-01T16:22:56.481+0000 D REPL [rsBackgroundSync] bgsync buffer has 51855087 bytes 2015-04-01T16:22:56.485+0000 D REPL [rsBackgroundSync] bgsync buffer has 51856797 bytes 
2015-04-01T16:22:56.485+0000 D REPL [rsBackgroundSync] bgsync buffer has 51858507 bytes
2015-04-01T16:22:56.488+0000 D REPL [rsBackgroundSync] bgsync buffer has 51860217 bytes
2015-04-01T16:22:56.493+0000 D REPL [rsBackgroundSync] bgsync buffer has 51861927 bytes
2015-04-01T16:22:56.493+0000 D REPL [rsBackgroundSync] bgsync buffer has 51863637 bytes
2015-04-01T16:22:56.497+0000 D REPL [rsBackgroundSync] bgsync buffer has 51865347 bytes
2015-04-01T16:22:56.497+0000 D REPL [rsBackgroundSync] bgsync buffer has 51867057 bytes
2015-04-01T16:22:56.501+0000 D REPL [rsBackgroundSync] bgsync buffer has 51868767 bytes
2015-04-01T16:22:56.506+0000 D REPL [rsBackgroundSync] bgsync buffer has 51870477 bytes
2015-04-01T16:22:56.506+0000 D REPL [rsBackgroundSync] bgsync buffer has 51872187 bytes
2015-04-01T16:22:56.511+0000 D REPL [rsBackgroundSync] bgsync buffer has 51873897 bytes
2015-04-01T16:22:56.511+0000 D REPL [rsBackgroundSync] bgsync buffer has 51875607 bytes
2015-04-01T16:22:56.514+0000 D REPL [rsBackgroundSync] bgsync buffer has 51877317 bytes
2015-04-01T16:22:56.514+0000 D REPL [rsBackgroundSync] bgsync buffer has 51879027 bytes
2015-04-01T16:22:56.517+0000 D REPL [rsBackgroundSync] bgsync buffer has 51880737 bytes
2015-04-01T16:22:56.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 51882447 bytes
2015-04-01T16:22:56.523+0000 D REPL [rsBackgroundSync] bgsync buffer has 51884157 bytes
2015-04-01T16:22:56.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51885867 bytes
2015-04-01T16:22:56.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51887577 bytes
2015-04-01T16:22:56.530+0000 D REPL [rsBackgroundSync] bgsync buffer has 51889287 bytes
2015-04-01T16:22:56.534+0000 D REPL [rsBackgroundSync] bgsync buffer has 51890997 bytes
2015-04-01T16:22:56.540+0000 D REPL [rsBackgroundSync] bgsync buffer has 51892707 bytes
2015-04-01T16:22:56.540+0000 D REPL [rsBackgroundSync] bgsync buffer has 51894417 bytes
2015-04-01T16:22:56.544+0000 D REPL [rsBackgroundSync] bgsync buffer has 51896127 bytes
2015-04-01T16:22:56.544+0000 D REPL [rsBackgroundSync] bgsync buffer has 51897837 bytes
2015-04-01T16:22:56.549+0000 D REPL [rsBackgroundSync] bgsync buffer has 51899547 bytes
2015-04-01T16:22:56.552+0000 D REPL [rsBackgroundSync] bgsync buffer has 51901257 bytes
2015-04-01T16:22:56.558+0000 D REPL [rsBackgroundSync] bgsync buffer has 51902967 bytes
2015-04-01T16:22:56.558+0000 D REPL [rsBackgroundSync] bgsync buffer has 51904677 bytes
2015-04-01T16:22:56.558+0000 D REPL [rsBackgroundSync] bgsync buffer has 51906387 bytes
2015-04-01T16:22:56.562+0000 D REPL [rsBackgroundSync] bgsync buffer has 51908097 bytes
2015-04-01T16:22:56.567+0000 D REPL [rsBackgroundSync] bgsync buffer has 51909807 bytes
2015-04-01T16:22:56.567+0000 D REPL [rsBackgroundSync] bgsync buffer has 51911517 bytes
2015-04-01T16:22:56.573+0000 D REPL [rsBackgroundSync] bgsync buffer has 51913227 bytes
2015-04-01T16:22:56.573+0000 D REPL [rsBackgroundSync] bgsync buffer has 51914937 bytes
2015-04-01T16:22:56.579+0000 D REPL [rsBackgroundSync] bgsync buffer has 51916647 bytes
2015-04-01T16:22:56.579+0000 D REPL [rsBackgroundSync] bgsync buffer has 51918357 bytes
2015-04-01T16:22:56.583+0000 D REPL [rsBackgroundSync] bgsync buffer has 51920067 bytes
2015-04-01T16:22:56.587+0000 D REPL [rsBackgroundSync] bgsync buffer has 51921777 bytes
2015-04-01T16:22:56.590+0000 D REPL [rsBackgroundSync] bgsync buffer has 51923487 bytes
2015-04-01T16:22:56.594+0000 D REPL [rsBackgroundSync] bgsync buffer has 51925197 bytes
2015-04-01T16:22:56.597+0000 D REPL [rsBackgroundSync] bgsync buffer has 51926907 bytes
2015-04-01T16:22:56.601+0000 D REPL [rsBackgroundSync] bgsync buffer has 51928617 bytes
2015-04-01T16:22:56.605+0000 D REPL [rsBackgroundSync] bgsync buffer has 51930327 bytes
2015-04-01T16:22:56.605+0000 D REPL [rsBackgroundSync] bgsync buffer has 51932037 bytes
2015-04-01T16:22:56.609+0000 D REPL [rsBackgroundSync] bgsync buffer has 51933747 bytes
2015-04-01T16:22:56.612+0000 D REPL [rsBackgroundSync] bgsync buffer has 51935457 bytes
2015-04-01T16:22:56.615+0000 D REPL [rsBackgroundSync] bgsync buffer has 51937167 bytes
2015-04-01T16:22:56.619+0000 D REPL [rsBackgroundSync] bgsync buffer has 51938877 bytes
2015-04-01T16:22:56.623+0000 D REPL [rsBackgroundSync] bgsync buffer has 51940587 bytes
2015-04-01T16:22:56.626+0000 D REPL [rsBackgroundSync] bgsync buffer has 51942297 bytes
2015-04-01T16:22:56.629+0000 D REPL [rsBackgroundSync] bgsync buffer has 51944007 bytes
2015-04-01T16:22:56.632+0000 D REPL [rsBackgroundSync] bgsync buffer has 51945717 bytes
2015-04-01T16:22:56.636+0000 D REPL [rsBackgroundSync] bgsync buffer has 51947427 bytes
2015-04-01T16:22:56.639+0000 D REPL [rsBackgroundSync] bgsync buffer has 51949137 bytes
2015-04-01T16:22:56.642+0000 D REPL [rsBackgroundSync] bgsync buffer has 51950847 bytes
2015-04-01T16:22:56.645+0000 D REPL [rsBackgroundSync] bgsync buffer has 51952557 bytes
2015-04-01T16:22:56.649+0000 D REPL [rsBackgroundSync] bgsync buffer has 51954267 bytes
2015-04-01T16:22:56.653+0000 D REPL [rsBackgroundSync] bgsync buffer has 51955977 bytes
2015-04-01T16:22:56.659+0000 D REPL [rsBackgroundSync] bgsync buffer has 51957687 bytes
2015-04-01T16:22:56.659+0000 D REPL [rsBackgroundSync] bgsync buffer has 51959397 bytes
2015-04-01T16:22:56.659+0000 D REPL [rsBackgroundSync] bgsync buffer has 51961107 bytes
2015-04-01T16:22:56.663+0000 D REPL [rsBackgroundSync] bgsync buffer has 51962817 bytes
2015-04-01T16:22:56.666+0000 D REPL [rsBackgroundSync] bgsync buffer has 51964527 bytes
2015-04-01T16:22:56.670+0000 D REPL [rsBackgroundSync] bgsync buffer has 51966237 bytes
2015-04-01T16:22:56.684+0000 D REPL [rsBackgroundSync] bgsync buffer has 51967947 bytes
2015-04-01T16:22:56.690+0000 D REPL [rsBackgroundSync] bgsync buffer has 51969957 bytes
2015-04-01T16:22:56.695+0000 D REPL [rsBackgroundSync] bgsync buffer has 51971967 bytes
2015-04-01T16:22:56.698+0000 D REPL [rsBackgroundSync] bgsync buffer has 51973977 bytes
2015-04-01T16:22:56.705+0000 D REPL [rsBackgroundSync] bgsync buffer has 51975987 bytes
2015-04-01T16:22:56.711+0000 D REPL [rsBackgroundSync] bgsync buffer has 51977997 bytes
2015-04-01T16:22:56.714+0000 D REPL [rsBackgroundSync] bgsync buffer has 51980007 bytes
2015-04-01T16:22:56.720+0000 D REPL [rsBackgroundSync] bgsync buffer has 51982017 bytes
2015-04-01T16:22:56.726+0000 D REPL [rsBackgroundSync] bgsync buffer has 51984027 bytes
2015-04-01T16:22:56.732+0000 D REPL [rsBackgroundSync] bgsync buffer has 51986037 bytes
2015-04-01T16:22:56.741+0000 D REPL [rsBackgroundSync] bgsync buffer has 51988047 bytes
2015-04-01T16:22:56.747+0000 D REPL [rsBackgroundSync] bgsync buffer has 51990057 bytes
2015-04-01T16:22:56.753+0000 D REPL [rsBackgroundSync] bgsync buffer has 51992067 bytes
2015-04-01T16:22:56.762+0000 D REPL [rsBackgroundSync] bgsync buffer has 51994077 bytes
2015-04-01T16:22:56.768+0000 D REPL [rsBackgroundSync] bgsync buffer has 51996087 bytes
2015-04-01T16:22:56.775+0000 D REPL [rsBackgroundSync] bgsync buffer has 51998097 bytes
2015-04-01T16:22:56.784+0000 D REPL [rsBackgroundSync] bgsync buffer has 52000107 bytes
2015-04-01T16:22:56.790+0000 D REPL [rsBackgroundSync] bgsync buffer has 52002117 bytes
2015-04-01T16:22:56.799+0000 D REPL [rsBackgroundSync] bgsync buffer has 52004127 bytes
2015-04-01T16:22:56.808+0000 D REPL [rsBackgroundSync] bgsync buffer has 52006137 bytes
2015-04-01T16:22:56.814+0000 D REPL [rsBackgroundSync] bgsync buffer has 52008147 bytes
2015-04-01T16:22:56.823+0000 D REPL [rsBackgroundSync] bgsync buffer has 52010157 bytes
2015-04-01T16:22:56.832+0000 D REPL [rsBackgroundSync] bgsync buffer has 52012167 bytes
2015-04-01T16:22:56.841+0000 D REPL [rsBackgroundSync] bgsync buffer has 52014177 bytes
2015-04-01T16:22:56.850+0000 D REPL [rsBackgroundSync] bgsync buffer has 52016187 bytes
2015-04-01T16:22:56.859+0000 D REPL [rsBackgroundSync] bgsync buffer has 52018197 bytes
2015-04-01T16:22:56.868+0000 D REPL [rsBackgroundSync] bgsync buffer has 52020207 bytes
2015-04-01T16:22:56.877+0000 D REPL [rsBackgroundSync] bgsync buffer has 52022217 bytes
2015-04-01T16:22:56.886+0000 D REPL [rsBackgroundSync] bgsync buffer has 52024227 bytes
2015-04-01T16:22:56.895+0000 D REPL [rsBackgroundSync] bgsync buffer has 52026237 bytes
2015-04-01T16:22:56.904+0000 D REPL [rsBackgroundSync] bgsync buffer has 52028247 bytes
2015-04-01T16:22:56.916+0000 D REPL [rsBackgroundSync] bgsync buffer has 52030257 bytes
2015-04-01T16:22:56.926+0000 D REPL [rsBackgroundSync] bgsync buffer has 52032267 bytes
2015-04-01T16:22:56.935+0000 D REPL [rsBackgroundSync] bgsync buffer has 52034277 bytes
2015-04-01T16:22:56.953+0000 D REPL [rsBackgroundSync] bgsync buffer has 52036287 bytes
2015-04-01T16:22:56.968+0000 D REPL [rsBackgroundSync] bgsync buffer has 52038297 bytes
2015-04-01T16:22:56.983+0000 D REPL [rsBackgroundSync] bgsync buffer has 52040307 bytes
2015-04-01T16:22:56.998+0000 D REPL [rsBackgroundSync] bgsync buffer has 52042317 bytes
2015-04-01T16:22:57.013+0000 D REPL [rsBackgroundSync] bgsync buffer has 52044327 bytes
2015-04-01T16:22:57.028+0000 D REPL [rsBackgroundSync] bgsync buffer has 52046337 bytes
2015-04-01T16:22:57.046+0000 D REPL [rsBackgroundSync] bgsync buffer has 52048347 bytes
2015-04-01T16:22:57.061+0000 D REPL [rsBackgroundSync] bgsync buffer has 52050357 bytes
2015-04-01T16:22:57.076+0000 D REPL [rsBackgroundSync] bgsync buffer has 52052367 bytes
2015-04-01T16:22:57.094+0000 D REPL [rsBackgroundSync] bgsync buffer has 52054377 bytes
2015-04-01T16:22:57.112+0000 D REPL [rsBackgroundSync] bgsync buffer has 52056387 bytes
2015-04-01T16:22:57.124+0000 D REPL [rsBackgroundSync] bgsync buffer has 52058397 bytes
2015-04-01T16:22:57.136+0000 D REPL [rsBackgroundSync] bgsync buffer has 52060407 bytes
2015-04-01T16:22:57.148+0000 D REPL [rsBackgroundSync] bgsync buffer has 52062417 bytes
2015-04-01T16:22:57.160+0000 D REPL [rsBackgroundSync] bgsync buffer has 52064427 bytes
2015-04-01T16:22:57.172+0000 D REPL [rsBackgroundSync] bgsync buffer has 52066437 bytes
2015-04-01T16:22:57.184+0000 D REPL [rsBackgroundSync] bgsync buffer has 52068447 bytes
2015-04-01T16:22:57.196+0000 D REPL [rsBackgroundSync] bgsync buffer has 52070457 bytes
2015-04-01T16:22:57.211+0000 D REPL [rsBackgroundSync] bgsync buffer has 52072467 bytes
2015-04-01T16:22:57.223+0000 D REPL [rsBackgroundSync] bgsync buffer has 52074477 bytes
2015-04-01T16:22:57.238+0000 D REPL [rsBackgroundSync] bgsync buffer has 52076487 bytes
2015-04-01T16:22:57.250+0000 D REPL [rsBackgroundSync] bgsync buffer has 52078497 bytes
2015-04-01T16:22:57.265+0000 D REPL [rsBackgroundSync] bgsync buffer has 52080507 bytes
2015-04-01T16:22:57.280+0000 D REPL [rsBackgroundSync] bgsync buffer has 52082517 bytes
2015-04-01T16:22:57.292+0000 D REPL [rsBackgroundSync] bgsync buffer has 52084527 bytes
2015-04-01T16:22:57.307+0000 D REPL [rsBackgroundSync] bgsync buffer has 52086537 bytes
2015-04-01T16:22:57.322+0000 D REPL [rsBackgroundSync] bgsync buffer has 52088547 bytes
2015-04-01T16:22:57.334+0000 D REPL [rsBackgroundSync] bgsync buffer has 52090557 bytes
2015-04-01T16:22:57.349+0000 D REPL [rsBackgroundSync] bgsync buffer has 52092567 bytes
2015-04-01T16:22:57.355+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:57.355+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:57.356+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:22:59.356Z
2015-04-01T16:22:57.367+0000 D REPL [rsBackgroundSync] bgsync buffer has 52094577 bytes
2015-04-01T16:22:57.379+0000 D REPL [rsBackgroundSync] bgsync buffer has 52096587 bytes
2015-04-01T16:22:57.394+0000 D REPL [rsBackgroundSync] bgsync buffer has 52098597 bytes
2015-04-01T16:22:57.409+0000 D REPL [rsBackgroundSync] bgsync buffer has 52100607 bytes
2015-04-01T16:22:57.443+0000 D REPL [rsBackgroundSync] bgsync buffer has 52102451 bytes
2015-04-01T16:22:57.486+0000 D REPL [rsBackgroundSync] bgsync buffer has 52104021 bytes
2015-04-01T16:22:57.526+0000 D REPL [rsBackgroundSync] bgsync buffer has 52105613 bytes
2015-04-01T16:22:57.555+0000 D REPL [rsBackgroundSync] bgsync buffer has 52107220 bytes
2015-04-01T16:22:57.581+0000 D REPL [rsBackgroundSync] bgsync buffer has 52108916 bytes
2015-04-01T16:22:57.622+0000 D REPL [rsBackgroundSync] bgsync buffer has 52110594 bytes
2015-04-01T16:22:57.646+0000 D REPL [rsBackgroundSync] bgsync buffer has 52112175 bytes
2015-04-01T16:22:57.669+0000 D REPL [rsBackgroundSync] bgsync buffer has 52113846 bytes
2015-04-01T16:22:57.700+0000 D REPL [rsBackgroundSync] bgsync buffer has 52115483 bytes
2015-04-01T16:22:57.733+0000 D REPL [rsBackgroundSync] bgsync buffer has 52117240 bytes
2015-04-01T16:22:57.783+0000 D REPL [rsBackgroundSync] bgsync buffer has 52118975 bytes
2015-04-01T16:22:57.805+0000 D REPL [rsBackgroundSync] bgsync buffer has 52120765 bytes
2015-04-01T16:22:58.149+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:58.149+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:22:58.277+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:58.277+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:22:58.277+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 127ms
2015-04-01T16:22:58.277+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:22:58.278+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:22:58.278+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:22:58.278+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:00.278Z
2015-04-01T16:22:59.356+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:22:59.356+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:22:59.357+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:01.356Z
2015-04-01T16:23:00.159+0000 D REPL [rsBackgroundSync] bgsync buffer has 85676849 bytes
2015-04-01T16:23:00.212+0000 D REPL [rsBackgroundSync] bgsync buffer has 85678562 bytes
2015-04-01T16:23:00.264+0000 D REPL [rsBackgroundSync] bgsync buffer has 85680283 bytes
2015-04-01T16:23:00.279+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:00.279+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:00.279+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:00.279+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events
2015-04-01T16:23:00.279+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:00.279+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:00.279+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:00.280+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:00.280+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:00.280+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:02.280Z
2015-04-01T16:23:00.304+0000 D REPL [rsBackgroundSync] bgsync buffer has 85681947 bytes
2015-04-01T16:23:00.332+0000 D REPL [rsBackgroundSync] bgsync buffer has 85683577 bytes
2015-04-01T16:23:00.475+0000 D REPL [rsBackgroundSync] bgsync buffer has 85685222 bytes
2015-04-01T16:23:00.557+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:00.558+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:00.558+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:00.558+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:01.356+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:23:01.356+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:23:01.356+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:23:01.356+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:03.356Z
2015-04-01T16:23:02.089+0000 I STORAGE [DataFileSync] flushing mmaps took 16ms for 10 files
2015-04-01T16:23:02.280+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:02.281+0000 D COMMAND [conn24] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:02.281+0000 D COMMAND [conn24] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:02.281+0000 I COMMAND [conn24] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:02.281+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:02.281+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:04.281Z
2015-04-01T16:23:02.282+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:02.282+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:02.282+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:03.356+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:23:03.596+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:23:03.597+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:05.597Z
2015-04-01T16:23:04.282+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:04.282+0000 D NETWORK [conn24] SocketException: remote: 127.0.0.1:63030 error: 9001 socket exception [CLOSED] server [127.0.0.1:63030]
2015-04-01T16:23:04.282+0000 I NETWORK [conn24] end connection 127.0.0.1:63030 (3 connections now open)
2015-04-01T16:23:04.283+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:04.283+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:06.283Z
2015-04-01T16:23:04.283+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:04.283+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:04.284+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:04.284+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63045 #26 (4 connections now open)
2015-04-01T16:23:04.288+0000 D QUERY [conn26] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:23:04.288+0000 D COMMAND [conn26] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6646387959544D724A66743676533963414A79764964727568714C66636D4276) }
2015-04-01T16:23:04.289+0000 I COMMAND [conn26] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6646387959544D724A66743676533963414A79764964727568714C66636D4276) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:23:04.324+0000 D COMMAND [conn26] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D6646387959544D724A66743676533963414A79764964727568714C66636D4276426B784C4479552F4972727770414B4263557749773642382B612F6D33...), conversationId: 1 }
2015-04-01T16:23:04.324+0000 I COMMAND [conn26] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6646387959544D724A66743676533963414A79764964727568714C66636D4276426B784C4479552F4972727770414B4263557749773642382B612F6D33...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:23:04.324+0000 D COMMAND [conn26] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:23:04.324+0000 I ACCESS [conn26] Successfully authenticated as principal __system on local
2015-04-01T16:23:04.324+0000 I COMMAND [conn26] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:23:04.325+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:04.325+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:04.326+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:04.402+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:04.403+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:04.404+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:04.404+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:05.598+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:23:05.599+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:23:05.599+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:07.599Z
2015-04-01T16:23:06.283+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:06.283+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2015-04-01T16:23:06.284+0000 D COMMAND [conn25] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:06.284+0000 D COMMAND [conn25] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:06.284+0000 I COMMAND [conn25] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:06.286+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27019 (127.0.0.1)
2015-04-01T16:23:06.290+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost
2015-04-01T16:23:06.321+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:06.321+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:08.321Z
2015-04-01T16:23:06.326+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:06.326+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:06.326+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:07.267+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63054 #27 (5 connections now open)
2015-04-01T16:23:07.443+0000 W NETWORK [conn27] no SSL certificate provided by peer
2015-04-01T16:23:07.490+0000 D QUERY [conn27] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:23:07.490+0000 D COMMAND [conn27] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:07.491+0000 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:07.608+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:23:07.608+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events
2015-04-01T16:23:07.609+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:23:07.609+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:09.609Z
2015-04-01T16:23:07.659+0000 D COMMAND [conn27] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:07.659+0000 I COMMAND [conn27] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:07.710+0000 D COMMAND [conn27] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D50512321663A57336336652462432823706F6D3A) }
2015-04-01T16:23:07.711+0000 I COMMAND [conn27] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D50512321663A57336336652462432823706F6D3A) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms
2015-04-01T16:23:07.842+0000 D COMMAND [conn27] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D50512321663A57336336652462432823706F6D3A5A676354594557394A6B6E78756C504F6E504138556D7061747077694E646D562C703D4170367A6771...) }
2015-04-01T16:23:07.842+0000 I COMMAND [conn27] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D50512321663A57336336652462432823706F6D3A5A676354594557394A6B6E78756C504F6E504138556D7061747077694E646D562C703D4170367A6771...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:23:07.857+0000 D COMMAND [conn27] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) }
2015-04-01T16:23:07.857+0000 I ACCESS [conn27] Successfully authenticated as principal bob on admin
2015-04-01T16:23:07.857+0000 I COMMAND [conn27] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:23:07.859+0000 D COMMAND [conn27] run command admin.$cmd { getLastError: 1 }
2015-04-01T16:23:07.859+0000 I COMMAND [conn27] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms
2015-04-01T16:23:07.877+0000 D COMMAND [conn27] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:07.877+0000 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:07.892+0000 D COMMAND [conn27] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:07.892+0000 I COMMAND [conn27] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:08.285+0000 D NETWORK [conn25] SocketException: remote: 127.0.0.1:63033 error: 9001 socket exception [CLOSED] server [127.0.0.1:63033]
2015-04-01T16:23:08.285+0000 I NETWORK [conn25] end connection 127.0.0.1:63033 (4 connections now open)
2015-04-01T16:23:08.285+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63057 #28 (5 connections now open)
2015-04-01T16:23:08.289+0000 D QUERY [conn28] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:23:08.289+0000 D COMMAND [conn28] run command local.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D693461574A732F4172384969444344614B41557038554D667465553762746C57) }
2015-04-01T16:23:08.289+0000 I COMMAND [conn28] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D693461574A732F4172384969444344614B41557038554D667465553762746C57) } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:179 locks:{} 0ms
2015-04-01T16:23:08.317+0000 D COMMAND [conn28] run command local.$cmd { saslContinue: 1, payload: BinData(0, 633D626977732C723D693461574A732F4172384969444344614B41557038554D667465553762746C57776B6E685A6162767A716C692F57502B6C6139337645555A746A5A4768...), conversationId: 1 }
2015-04-01T16:23:08.317+0000 I COMMAND [conn28] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D693461574A732F4172384969444344614B41557038554D667465553762746C57776B6E685A6162767A716C692F57502B6C6139337645555A746A5A4768...), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:23:08.317+0000 D COMMAND [conn28] run command local.$cmd { saslContinue: 1, payload: BinData(0, ), conversationId: 1 }
2015-04-01T16:23:08.317+0000 I ACCESS [conn28] Successfully authenticated as principal __system on local
2015-04-01T16:23:08.318+0000 I COMMAND [conn28] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:23:08.318+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:08.318+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:08.318+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:08.322+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:08.322+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:08.322+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:10.322Z
2015-04-01T16:23:08.328+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:08.328+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:08.328+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:09.178+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63059 #29 (6 connections now open)
2015-04-01T16:23:09.285+0000 D QUERY [conn29] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN
2015-04-01T16:23:09.285+0000 D COMMAND [conn29] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:09.286+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:09.286+0000 D COMMAND [conn29] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:09.286+0000 I COMMAND [conn29] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:09.287+0000 D COMMAND [conn29] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D78592B5B772E575F3C54482B5E23225332474F37) }
2015-04-01T16:23:09.287+0000 I COMMAND [conn29] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D78592B5B772E575F3C54482B5E23225332474F37) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms
2015-04-01T16:23:09.379+0000 D COMMAND [conn29] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D78592B5B772E575F3C54482B5E23225332474F374C3152715474506A3767553779746974526A332F4B4A3031784C38503853614C2C703D686F46386948...) }
2015-04-01T16:23:09.379+0000 I COMMAND [conn29] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D78592B5B772E575F3C54482B5E23225332474F374C3152715474506A3767553779746974526A332F4B4A3031784C38503853614C2C703D686F46386948...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms
2015-04-01T16:23:09.379+0000 D COMMAND [conn29] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) }
2015-04-01T16:23:09.379+0000 I ACCESS [conn29] Successfully authenticated as principal bob on admin
2015-04-01T16:23:09.380+0000 I COMMAND [conn29] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms
2015-04-01T16:23:09.380+0000 D COMMAND [conn29] run command admin.$cmd { getLastError: 1 }
2015-04-01T16:23:09.380+0000 I COMMAND [conn29] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms
2015-04-01T16:23:09.380+0000 D COMMAND [conn29] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:09.381+0000 I COMMAND [conn29] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:09.381+0000 D COMMAND [conn29] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:09.381+0000 I COMMAND [conn29] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:09.609+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017
2015-04-01T16:23:09.609+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK
2015-04-01T16:23:09.609+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:11.609Z
2015-04-01T16:23:10.319+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:10.319+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false }
2015-04-01T16:23:10.319+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:10.323+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019
2015-04-01T16:23:10.323+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK
2015-04-01T16:23:10.323+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:12.323Z
2015-04-01T16:23:10.329+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:10.329+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false }
2015-04-01T16:23:10.329+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms
2015-04-01T16:23:10.555+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2015-04-01T16:23:10.555+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms
2015-04-01T16:23:10.556+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 }
2015-04-01T16:23:10.556+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms
2015-04-01T16:23:10.956+0000 D REPL [rsBackgroundSync] bgsync buffer has 85688294 bytes
2015-04-01T16:23:11.448+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63062 #30 (7 connections now open)
2015-04-01T16:23:11.452+0000 W NETWORK [conn30] no SSL
certificate provided by peer 2015-04-01T16:23:11.454+0000 D QUERY [conn30] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:23:11.454+0000 D COMMAND [conn30] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:11.454+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:11.455+0000 D COMMAND [conn30] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:11.455+0000 I COMMAND [conn30] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:11.455+0000 D COMMAND [conn30] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D2A2E78335D3C612552307445347969417D795E74) } 2015-04-01T16:23:11.456+0000 I COMMAND [conn30] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D2A2E78335D3C612552307445347969417D795E74) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms 2015-04-01T16:23:11.573+0000 D COMMAND [conn30] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D2A2E78335D3C612552307445347969417D795E74634D414D7A593230396C367A4E49704762757737684F7157494C486E67676E782C703D70786A44506F...) } 2015-04-01T16:23:11.573+0000 I COMMAND [conn30] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D2A2E78335D3C612552307445347969417D795E74634D414D7A593230396C367A4E49704762757737684F7157494C486E67676E782C703D70786A44506F...) 
} keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:23:11.590+0000 D COMMAND [conn30] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:23:11.590+0000 I ACCESS [conn30] Successfully authenticated as principal bob on admin 2015-04-01T16:23:11.591+0000 I COMMAND [conn30] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:23:11.591+0000 D COMMAND [conn30] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:23:11.591+0000 I COMMAND [conn30] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:23:11.592+0000 D COMMAND [conn30] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:11.592+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:11.593+0000 D COMMAND [conn30] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:11.593+0000 I COMMAND [conn30] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:11.609+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:11.609+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:11.609+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:13.609Z 2015-04-01T16:23:12.319+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:12.319+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 
2015-04-01T16:23:12.319+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:12.323+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:12.323+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:23:12.323+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:12.323+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:14.323Z 2015-04-01T16:23:12.330+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:12.330+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:12.330+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:12.617+0000 D REPL [rsBackgroundSync] bgsync buffer has 85690568 bytes 2015-04-01T16:23:12.706+0000 D REPL [rsBackgroundSync] bgsync buffer has 85692236 bytes 2015-04-01T16:23:12.747+0000 D REPL [rsBackgroundSync] bgsync buffer has 85693826 bytes 2015-04-01T16:23:12.806+0000 D REPL [rsBackgroundSync] bgsync buffer has 85695416 bytes 2015-04-01T16:23:12.845+0000 D REPL [rsBackgroundSync] bgsync buffer has 85697006 bytes 2015-04-01T16:23:12.874+0000 D REPL [rsBackgroundSync] bgsync buffer has 85698587 bytes 2015-04-01T16:23:12.890+0000 D REPL [rsBackgroundSync] bgsync buffer has 85700174 bytes 2015-04-01T16:23:12.914+0000 D REPL 
[rsBackgroundSync] bgsync buffer has 85701758 bytes 2015-04-01T16:23:12.948+0000 D REPL [rsBackgroundSync] bgsync buffer has 85703384 bytes 2015-04-01T16:23:12.966+0000 D REPL [rsBackgroundSync] bgsync buffer has 85705010 bytes 2015-04-01T16:23:12.984+0000 D REPL [rsBackgroundSync] bgsync buffer has 85706599 bytes 2015-04-01T16:23:13.009+0000 D REPL [rsBackgroundSync] bgsync buffer has 85708230 bytes 2015-04-01T16:23:13.030+0000 D REPL [rsBackgroundSync] bgsync buffer has 85709860 bytes 2015-04-01T16:23:13.048+0000 D REPL [rsBackgroundSync] bgsync buffer has 85711450 bytes 2015-04-01T16:23:13.082+0000 D REPL [rsBackgroundSync] bgsync buffer has 85713058 bytes 2015-04-01T16:23:13.106+0000 D REPL [rsBackgroundSync] bgsync buffer has 85714666 bytes 2015-04-01T16:23:13.132+0000 D REPL [rsBackgroundSync] bgsync buffer has 85716296 bytes 2015-04-01T16:23:13.148+0000 D REPL [rsBackgroundSync] bgsync buffer has 85717906 bytes 2015-04-01T16:23:13.173+0000 D REPL [rsBackgroundSync] bgsync buffer has 85719535 bytes 2015-04-01T16:23:13.611+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:13.612+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27017, no events 2015-04-01T16:23:13.612+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:13.612+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:15.612Z 2015-04-01T16:23:13.756+0000 D NETWORK [conn29] SocketException: remote: 127.0.0.1:63059 error: 9001 socket exception [CLOSED] server [127.0.0.1:63059] 2015-04-01T16:23:13.756+0000 I NETWORK [conn29] end connection 127.0.0.1:63059 (6 connections now open) 2015-04-01T16:23:14.319+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:14.319+0000 D COMMAND [conn28] command: { 
replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:14.319+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:14.323+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:14.323+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:14.323+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:16.323Z 2015-04-01T16:23:14.330+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:14.330+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:14.330+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:14.402+0000 D COMMAND [conn23] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:14.403+0000 I COMMAND [conn23] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:14.405+0000 D COMMAND [conn23] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:14.405+0000 I COMMAND [conn23] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:15.700+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63070 #31 (7 connections now open) 2015-04-01T16:23:15.710+0000 D REPL 
[ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:15.711+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:15.711+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:17.711Z 2015-04-01T16:23:15.885+0000 W NETWORK [conn31] no SSL certificate provided by peer 2015-04-01T16:23:16.204+0000 D QUERY [conn31] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:23:16.204+0000 D COMMAND [conn31] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:16.204+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:16.323+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:16.324+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:16.324+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:16.324+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:16.328+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:18.328Z 2015-04-01T16:23:16.328+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 4ms 2015-04-01T16:23:16.337+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:16.337+0000 D 
COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:16.337+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:16.443+0000 D COMMAND [conn31] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:16.443+0000 I COMMAND [conn31] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:16.500+0000 D COMMAND [conn31] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D3A7375312D236A5648734E4950605A287224585E) } 2015-04-01T16:23:16.500+0000 I COMMAND [conn31] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D3A7375312D236A5648734E4950605A287224585E) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms 2015-04-01T16:23:16.658+0000 D COMMAND [conn31] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D3A7375312D236A5648734E4950605A287224585E6F48325A666C7933704D2B533162656144326962686356643472364B356738792C703D543365665572...) } 2015-04-01T16:23:16.658+0000 I COMMAND [conn31] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D3A7375312D236A5648734E4950605A287224585E6F48325A666C7933704D2B533162656144326962686356643472364B356738792C703D543365665572...) 
} keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:23:16.660+0000 D COMMAND [conn31] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:23:16.660+0000 I ACCESS [conn31] Successfully authenticated as principal bob on admin 2015-04-01T16:23:16.660+0000 I COMMAND [conn31] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:23:16.663+0000 D COMMAND [conn31] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:23:16.663+0000 I COMMAND [conn31] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:23:16.680+0000 D COMMAND [conn31] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:16.680+0000 I COMMAND [conn31] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:16.684+0000 D COMMAND [conn31] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:16.684+0000 I COMMAND [conn31] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:17.271+0000 D COMMAND [conn27] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:17.271+0000 I COMMAND [conn27] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:17.272+0000 D COMMAND [conn27] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:17.272+0000 I COMMAND [conn27] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:17.494+0000 D REPL [rsBackgroundSync] bgsync buffer has 85721125 bytes 2015-04-01T16:23:17.712+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 
2015-04-01T16:23:17.712+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:17.712+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:19.712Z 2015-04-01T16:23:17.900+0000 D REPL [rsBackgroundSync] bgsync buffer has 85723193 bytes 2015-04-01T16:23:18.191+0000 D REPL [rsBackgroundSync] bgsync buffer has 85725191 bytes 2015-04-01T16:23:18.328+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:18.328+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:23:18.328+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:18.328+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:20.328Z 2015-04-01T16:23:18.329+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:18.329+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:18.329+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:18.338+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:18.338+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:18.338+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, 
checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:19.747+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:19.748+0000 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2015-04-01T16:23:19.748+0000 D NETWORK [ReplExecNetThread-0] connected to server localhost:27017 (127.0.0.1) 2015-04-01T16:23:19.751+0000 W NETWORK [ReplExecNetThread-0] The server certificate does not match the host name localhost 2015-04-01T16:23:19.781+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:19.782+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:21.781Z 2015-04-01T16:23:19.852+0000 D REPL [rsBackgroundSync] bgsync buffer has 85727389 bytes 2015-04-01T16:23:19.939+0000 D REPL [rsBackgroundSync] bgsync buffer has 85729391 bytes 2015-04-01T16:23:20.328+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:20.328+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:20.328+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:22.328Z 2015-04-01T16:23:20.330+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:20.330+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:20.330+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:20.341+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", 
pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:20.341+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:20.341+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:20.451+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:63076 #32 (8 connections now open) 2015-04-01T16:23:20.518+0000 D QUERY [conn32] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} skip: 0 limit: 0, planSummary: COLLSCAN 2015-04-01T16:23:20.518+0000 D COMMAND [conn32] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:20.518+0000 I COMMAND [conn32] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:20.519+0000 D COMMAND [conn32] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:20.519+0000 I COMMAND [conn32] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:20.519+0000 D COMMAND [conn32] run command admin.$cmd { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D474225736373793C71232B3B7D737D6038615D5D) } 2015-04-01T16:23:20.519+0000 I COMMAND [conn32] command admin.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D626F622C723D474225736373793C71232B3B7D737D6038615D5D) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:167 locks:{} 0ms 2015-04-01T16:23:20.628+0000 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:20.629+0000 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 
writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:20.656+0000 D COMMAND [conn22] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:20.657+0000 I COMMAND [conn22] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:20.705+0000 D COMMAND [conn32] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D474225736373793C71232B3B7D737D6038615D5D78384A75487076755A762B655141414E307156443338354151547249612B53502C703D5274566D4976...) } 2015-04-01T16:23:20.705+0000 I COMMAND [conn32] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, 633D626977732C723D474225736373793C71232B3B7D737D6038615D5D78384A75487076755A762B655141414E307156443338354151547249612B53502C703D5274566D4976...) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:108 locks:{} 0ms 2015-04-01T16:23:20.706+0000 D COMMAND [conn32] run command admin.$cmd { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } 2015-04-01T16:23:20.706+0000 I ACCESS [conn32] Successfully authenticated as principal bob on admin 2015-04-01T16:23:20.706+0000 I COMMAND [conn32] command admin.$cmd command: saslContinue { saslContinue: 1, conversationId: 1, payload: BinData(0, ) } keyUpdates:0 writeConflicts:0 numYields:0 reslen:78 locks:{} 0ms 2015-04-01T16:23:20.707+0000 D COMMAND [conn32] run command admin.$cmd { getLastError: 1 } 2015-04-01T16:23:20.707+0000 I COMMAND [conn32] command admin.$cmd command: getLastError { getLastError: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:110 locks:{} 0ms 2015-04-01T16:23:20.709+0000 D COMMAND [conn32] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:20.709+0000 I COMMAND [conn32] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:20.710+0000 D COMMAND [conn32] run command admin.$cmd { buildInfo: 
1 } 2015-04-01T16:23:20.710+0000 I COMMAND [conn32] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:20.864+0000 D NETWORK [conn32] SocketException: remote: 127.0.0.1:63076 error: 9001 socket exception [CLOSED] server [127.0.0.1:63076] 2015-04-01T16:23:20.864+0000 I NETWORK [conn32] end connection 127.0.0.1:63076 (7 connections now open) 2015-04-01T16:23:21.458+0000 D COMMAND [conn30] run command admin.$cmd { isMaster: 1 } 2015-04-01T16:23:21.459+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:398 locks:{} 0ms 2015-04-01T16:23:21.459+0000 D COMMAND [conn30] run command admin.$cmd { buildInfo: 1 } 2015-04-01T16:23:21.460+0000 I COMMAND [conn30] command admin.$cmd command: buildInfo { buildInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:744 locks:{} 0ms 2015-04-01T16:23:21.781+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:21.781+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:21.781+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:23.781Z 2015-04-01T16:23:22.328+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:22.328+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:22.328+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:24.328Z 2015-04-01T16:23:22.330+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:22.330+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 
2015-04-01T16:23:22.330+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:22.341+0000 D COMMAND [conn26] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:22.341+0000 D COMMAND [conn26] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } 2015-04-01T16:23:22.341+0000 I COMMAND [conn26] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27017", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:148 locks:{} 0ms 2015-04-01T16:23:23.781+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27017 2015-04-01T16:23:23.781+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27017 was OK 2015-04-01T16:23:23.781+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27017 at 2015-04-01T16:23:25.781Z 2015-04-01T16:23:23.963+0000 D NETWORK [conn23] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:63027 2015-04-01T16:23:23.963+0000 D NETWORK [conn23] SocketException: remote: 127.0.0.1:63027 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63027] 2015-04-01T16:23:23.963+0000 I NETWORK [conn23] end connection 127.0.0.1:63027 (6 connections now open) 2015-04-01T16:23:23.964+0000 D NETWORK [conn22] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:63022 2015-04-01T16:23:23.964+0000 D NETWORK [conn22] SocketException: remote: 127.0.0.1:63022 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63022] 2015-04-01T16:23:23.964+0000 I NETWORK [conn22] end connection 127.0.0.1:63022 (5 connections now open) 2015-04-01T16:23:23.974+0000 D NETWORK [conn27] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:63054 2015-04-01T16:23:23.974+0000 D NETWORK [conn27] SocketException: remote: 127.0.0.1:63054 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63054] 2015-04-01T16:23:23.974+0000 I NETWORK [conn27] end connection 127.0.0.1:63054 (4 connections now open) 2015-04-01T16:23:23.974+0000 D NETWORK [conn30] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:63062 2015-04-01T16:23:23.974+0000 D NETWORK [conn30] SocketException: remote: 127.0.0.1:63062 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63062] 2015-04-01T16:23:23.975+0000 I NETWORK [conn30] end connection 127.0.0.1:63062 (3 connections now open) 2015-04-01T16:23:23.982+0000 D NETWORK [conn31] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:63070 2015-04-01T16:23:23.982+0000 D NETWORK [conn31] SocketException: remote: 127.0.0.1:63070 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63070] 2015-04-01T16:23:23.982+0000 I NETWORK [conn31] end connection 127.0.0.1:63070 (2 connections now open) 2015-04-01T16:23:24.312+0000 D NETWORK [conn26] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:63045 2015-04-01T16:23:24.312+0000 D NETWORK [conn26] SocketException: remote: 127.0.0.1:63045 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:63045] 2015-04-01T16:23:24.312+0000 I NETWORK [conn26] end connection 127.0.0.1:63045 (1 connection now open) 2015-04-01T16:23:24.313+0000 I NETWORK [rsBackgroundSync] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:27017 2015-04-01T16:23:24.313+0000 I NETWORK [rsBackgroundSync] SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:27017] 2015-04-01T16:23:24.313+0000 D - [rsBackgroundSync] User Assertion: 10278:dbclient error communicating with server: localhost:27017 2015-04-01T16:23:24.313+0000 E REPL [rsBackgroundSync] sync producer problem: 10278 dbclient error communicating with server: localhost:27017 2015-04-01T16:23:24.328+0000 D REPL [ReplicationExecutor] Scheduling replSetHeartbeat to localhost:27019 2015-04-01T16:23:24.328+0000 D NETWORK [ReplExecNetThread-0] polling for status of connection to 127.0.0.1:27019, no events 2015-04-01T16:23:24.328+0000 D REPL [ReplExecNetThread-0] Network status of sending replSetHeartbeat to localhost:27019 was OK 2015-04-01T16:23:24.328+0000 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:27019 at 2015-04-01T16:23:26.328Z 2015-04-01T16:23:24.330+0000 D COMMAND [conn28] run command admin.$cmd { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:24.330+0000 D COMMAND [conn28] command: { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } 2015-04-01T16:23:24.330+0000 I COMMAND [conn28] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "repl0", pv: 1, v: 1, from: "localhost:27019", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:234 locks:{} 0ms